00:00:00.002 Started by upstream project "autotest-per-patch" build number 132357 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.103 using credential 00000000-0000-0000-0000-000000000002 00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.162 Fetching changes from the remote Git repository 00:00:00.164 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.212 Using shallow fetch with depth 1 00:00:00.212 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.212 > git --version # timeout=10 00:00:00.252 > git --version # 'git version 2.39.2' 00:00:00.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.273 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.273 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.645 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.657 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.692 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.692 > git config core.sparsecheckout # timeout=10 00:00:05.705 > git read-tree -mu HEAD # timeout=10 00:00:05.721 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.744 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.744 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.818 [Pipeline] Start of Pipeline 00:00:05.830 [Pipeline] library 00:00:05.831 Loading library shm_lib@master 00:00:05.832 Library shm_lib@master is cached. Copying from home. 00:00:05.849 [Pipeline] node 00:00:05.857 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.859 [Pipeline] { 00:00:05.870 [Pipeline] catchError 00:00:05.871 [Pipeline] { 00:00:05.883 [Pipeline] wrap 00:00:05.892 [Pipeline] { 00:00:05.900 [Pipeline] stage 00:00:05.901 [Pipeline] { (Prologue) 00:00:06.129 [Pipeline] sh 00:00:06.419 + logger -p user.info -t JENKINS-CI 00:00:06.434 [Pipeline] echo 00:00:06.435 Node: CYP9 00:00:06.443 [Pipeline] sh 00:00:06.743 [Pipeline] setCustomBuildProperty 00:00:06.753 [Pipeline] echo 00:00:06.755 Cleanup processes 00:00:06.761 [Pipeline] sh 00:00:07.050 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.050 362239 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.063 [Pipeline] sh 00:00:07.353 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.353 ++ grep -v 'sudo pgrep' 00:00:07.353 ++ awk '{print $1}' 00:00:07.353 + sudo kill -9 00:00:07.353 + true 00:00:07.367 [Pipeline] cleanWs 00:00:07.376 [WS-CLEANUP] Deleting project workspace... 00:00:07.376 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.384 [WS-CLEANUP] done 00:00:07.387 [Pipeline] setCustomBuildProperty 00:00:07.399 [Pipeline] sh 00:00:07.686 + sudo git config --global --replace-all safe.directory '*' 00:00:07.782 [Pipeline] httpRequest 00:00:08.125 [Pipeline] echo 00:00:08.127 Sorcerer 10.211.164.20 is alive 00:00:08.135 [Pipeline] retry 00:00:08.137 [Pipeline] { 00:00:08.153 [Pipeline] httpRequest 00:00:08.157 HttpMethod: GET 00:00:08.157 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.158 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.179 Response Code: HTTP/1.1 200 OK 00:00:08.180 Success: Status code 200 is in the accepted range: 200,404 00:00:08.180 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.225 [Pipeline] } 00:00:13.243 [Pipeline] // retry 00:00:13.252 [Pipeline] sh 00:00:13.544 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.561 [Pipeline] httpRequest 00:00:13.923 [Pipeline] echo 00:00:13.924 Sorcerer 10.211.164.20 is alive 00:00:13.930 [Pipeline] retry 00:00:13.931 [Pipeline] { 00:00:13.938 [Pipeline] httpRequest 00:00:13.941 HttpMethod: GET 00:00:13.942 URL: http://10.211.164.20/packages/spdk_17ebaf46feade46b375a6932fdab7abbf80370f3.tar.gz 00:00:13.942 Sending request to url: http://10.211.164.20/packages/spdk_17ebaf46feade46b375a6932fdab7abbf80370f3.tar.gz 00:00:13.964 Response Code: HTTP/1.1 200 OK 00:00:13.964 Success: Status code 200 is in the accepted range: 200,404 00:00:13.964 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_17ebaf46feade46b375a6932fdab7abbf80370f3.tar.gz 00:00:48.615 [Pipeline] } 00:00:48.633 [Pipeline] // retry 00:00:48.641 [Pipeline] sh 00:00:48.931 + tar --no-same-owner -xf spdk_17ebaf46feade46b375a6932fdab7abbf80370f3.tar.gz 00:00:52.253 [Pipeline] sh 00:00:52.540 + git -C spdk log 
--oneline -n5 00:00:52.540 17ebaf46f test/packaging: Remove rpath workarounds in tests 00:00:52.540 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:00:52.540 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:00:52.540 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:00:52.540 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 00:00:52.552 [Pipeline] } 00:00:52.566 [Pipeline] // stage 00:00:52.576 [Pipeline] stage 00:00:52.578 [Pipeline] { (Prepare) 00:00:52.596 [Pipeline] writeFile 00:00:52.613 [Pipeline] sh 00:00:52.904 + logger -p user.info -t JENKINS-CI 00:00:52.918 [Pipeline] sh 00:00:53.205 + logger -p user.info -t JENKINS-CI 00:00:53.219 [Pipeline] sh 00:00:53.510 + cat autorun-spdk.conf 00:00:53.510 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.510 SPDK_TEST_NVMF=1 00:00:53.510 SPDK_TEST_NVME_CLI=1 00:00:53.510 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.510 SPDK_TEST_NVMF_NICS=e810 00:00:53.510 SPDK_TEST_VFIOUSER=1 00:00:53.510 SPDK_RUN_UBSAN=1 00:00:53.510 NET_TYPE=phy 00:00:53.518 RUN_NIGHTLY=0 00:00:53.524 [Pipeline] readFile 00:00:53.550 [Pipeline] withEnv 00:00:53.551 [Pipeline] { 00:00:53.565 [Pipeline] sh 00:00:53.873 + set -ex 00:00:53.874 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:53.874 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:53.874 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.874 ++ SPDK_TEST_NVMF=1 00:00:53.874 ++ SPDK_TEST_NVME_CLI=1 00:00:53.874 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.874 ++ SPDK_TEST_NVMF_NICS=e810 00:00:53.874 ++ SPDK_TEST_VFIOUSER=1 00:00:53.874 ++ SPDK_RUN_UBSAN=1 00:00:53.874 ++ NET_TYPE=phy 00:00:53.874 ++ RUN_NIGHTLY=0 00:00:53.874 + case $SPDK_TEST_NVMF_NICS in 00:00:53.874 + DRIVERS=ice 00:00:53.874 + [[ tcp == \r\d\m\a ]] 00:00:53.874 + [[ -n ice ]] 00:00:53.874 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:53.874 rmmod: ERROR: Module mlx4_ib is not currently loaded 
00:00:53.875 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:53.875 rmmod: ERROR: Module irdma is not currently loaded 00:00:53.875 rmmod: ERROR: Module i40iw is not currently loaded 00:00:53.875 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:53.875 + true 00:00:53.875 + for D in $DRIVERS 00:00:53.875 + sudo modprobe ice 00:00:53.875 + exit 0 00:00:53.887 [Pipeline] } 00:00:53.896 [Pipeline] // withEnv 00:00:53.900 [Pipeline] } 00:00:53.908 [Pipeline] // stage 00:00:53.914 [Pipeline] catchError 00:00:53.915 [Pipeline] { 00:00:53.924 [Pipeline] timeout 00:00:53.925 Timeout set to expire in 1 hr 0 min 00:00:53.926 [Pipeline] { 00:00:53.937 [Pipeline] stage 00:00:53.939 [Pipeline] { (Tests) 00:00:53.951 [Pipeline] sh 00:00:54.238 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.238 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.238 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.238 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:54.238 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:54.238 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:54.238 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:54.238 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:54.238 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:54.238 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:54.238 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:54.238 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.238 + source /etc/os-release 00:00:54.238 ++ NAME='Fedora Linux' 00:00:54.238 ++ VERSION='39 (Cloud Edition)' 00:00:54.238 ++ ID=fedora 00:00:54.238 ++ VERSION_ID=39 00:00:54.238 ++ VERSION_CODENAME= 00:00:54.238 ++ PLATFORM_ID=platform:f39 00:00:54.238 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:54.238 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:54.238 ++ LOGO=fedora-logo-icon 00:00:54.238 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:54.238 ++ HOME_URL=https://fedoraproject.org/ 00:00:54.238 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:54.238 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:54.238 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:54.238 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:54.238 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:54.238 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:54.238 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:54.238 ++ SUPPORT_END=2024-11-12 00:00:54.238 ++ VARIANT='Cloud Edition' 00:00:54.238 ++ VARIANT_ID=cloud 00:00:54.238 + uname -a 00:00:54.238 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:54.238 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:57.536 Hugepages 00:00:57.536 node hugesize free / total 00:00:57.536 node0 1048576kB 0 / 0 00:00:57.536 node0 2048kB 0 / 0 00:00:57.536 node1 1048576kB 0 / 0 00:00:57.536 node1 2048kB 0 / 0 00:00:57.536 00:00:57.536 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:57.536 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:57.536 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:00:57.536 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:57.536 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:57.536 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:57.536 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:57.536 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:57.536 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:57.536 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:57.536 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:57.536 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:57.536 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:57.536 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:57.536 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:57.536 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:57.536 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:57.536 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:57.536 + rm -f /tmp/spdk-ld-path 00:00:57.536 + source autorun-spdk.conf 00:00:57.536 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.536 ++ SPDK_TEST_NVMF=1 00:00:57.536 ++ SPDK_TEST_NVME_CLI=1 00:00:57.536 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.536 ++ SPDK_TEST_NVMF_NICS=e810 00:00:57.536 ++ SPDK_TEST_VFIOUSER=1 00:00:57.536 ++ SPDK_RUN_UBSAN=1 00:00:57.536 ++ NET_TYPE=phy 00:00:57.536 ++ RUN_NIGHTLY=0 00:00:57.536 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:57.536 + [[ -n '' ]] 00:00:57.536 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:57.536 + for M in /var/spdk/build-*-manifest.txt 00:00:57.536 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:57.536 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:57.536 + for M in /var/spdk/build-*-manifest.txt 00:00:57.536 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:57.536 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:57.536 + for M in /var/spdk/build-*-manifest.txt 00:00:57.536 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:00:57.536 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:57.536 ++ uname 00:00:57.536 + [[ Linux == \L\i\n\u\x ]] 00:00:57.536 + sudo dmesg -T 00:00:57.536 + sudo dmesg --clear 00:00:57.536 + dmesg_pid=363215 00:00:57.536 + [[ Fedora Linux == FreeBSD ]] 00:00:57.536 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:57.536 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:57.536 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:57.536 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:57.536 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:57.536 + [[ -x /usr/src/fio-static/fio ]] 00:00:57.536 + export FIO_BIN=/usr/src/fio-static/fio 00:00:57.536 + FIO_BIN=/usr/src/fio-static/fio 00:00:57.536 + sudo dmesg -Tw 00:00:57.536 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:57.536 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:57.536 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:57.537 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:57.537 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:57.537 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:57.537 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:57.537 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:57.537 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:57.798 08:46:23 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:57.798 08:46:23 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:57.798 08:46:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.798 08:46:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:00:57.798 08:46:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:00:57.798 08:46:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.798 08:46:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:00:57.798 08:46:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:00:57.798 08:46:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:00:57.798 08:46:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:00:57.798 08:46:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:00:57.798 08:46:23 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:57.798 08:46:23 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:57.798 08:46:23 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:57.798 08:46:23 -- common/autobuild_common.sh@15 -- $ source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:57.798 08:46:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:57.798 08:46:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:57.798 08:46:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:57.798 08:46:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:57.798 08:46:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.798 08:46:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.798 08:46:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.798 08:46:23 -- paths/export.sh@5 -- $ export PATH 00:00:57.798 08:46:23 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.798 08:46:23 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:57.798 08:46:23 -- common/autobuild_common.sh@493 -- $ date +%s 00:00:57.798 08:46:23 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732088783.XXXXXX 00:00:57.798 08:46:23 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732088783.j874qx 00:00:57.798 08:46:23 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:00:57.798 08:46:23 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:00:57.798 08:46:23 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:57.798 08:46:23 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:57.798 08:46:23 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:57.798 08:46:23 -- common/autobuild_common.sh@509 -- $ get_config_params 00:00:57.798 08:46:23 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:00:57.798 08:46:23 -- common/autotest_common.sh@10 -- $ set +x 00:00:57.799 08:46:23 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio 
--with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:57.799 08:46:23 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:00:57.799 08:46:23 -- pm/common@17 -- $ local monitor 00:00:57.799 08:46:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.799 08:46:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.799 08:46:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.799 08:46:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.799 08:46:23 -- pm/common@21 -- $ date +%s 00:00:57.799 08:46:23 -- pm/common@21 -- $ date +%s 00:00:57.799 08:46:23 -- pm/common@25 -- $ sleep 1 00:00:57.799 08:46:23 -- pm/common@21 -- $ date +%s 00:00:57.799 08:46:23 -- pm/common@21 -- $ date +%s 00:00:57.799 08:46:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732088783 00:00:57.799 08:46:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732088783 00:00:57.799 08:46:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732088783 00:00:57.799 08:46:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732088783 00:00:57.799 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732088783_collect-cpu-load.pm.log 00:00:57.799 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732088783_collect-vmstat.pm.log 00:00:57.799 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732088783_collect-cpu-temp.pm.log 00:00:57.799 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732088783_collect-bmc-pm.bmc.pm.log 00:00:58.743 08:46:24 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:00:58.743 08:46:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:58.743 08:46:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:58.743 08:46:24 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:58.743 08:46:24 -- spdk/autobuild.sh@16 -- $ date -u 00:00:58.743 Wed Nov 20 07:46:24 AM UTC 2024 00:00:58.743 08:46:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:58.743 v25.01-pre-200-g17ebaf46f 00:00:58.743 08:46:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:58.743 08:46:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:58.743 08:46:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:58.743 08:46:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:58.743 08:46:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:58.743 08:46:24 -- common/autotest_common.sh@10 -- $ set +x 00:00:59.005 ************************************ 00:00:59.005 START TEST ubsan 00:00:59.005 ************************************ 00:00:59.005 08:46:24 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:00:59.005 using ubsan 00:00:59.005 00:00:59.005 real 0m0.001s 00:00:59.005 user 0m0.000s 00:00:59.005 sys 0m0.000s 00:00:59.005 08:46:24 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:00:59.005 08:46:24 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:59.005 ************************************ 00:00:59.005 END TEST ubsan 00:00:59.005 
************************************ 00:00:59.005 08:46:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:59.005 08:46:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:59.005 08:46:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:59.005 08:46:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:59.005 08:46:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:59.005 08:46:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:59.005 08:46:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:59.005 08:46:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:59.005 08:46:24 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:59.005 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:59.005 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:59.577 Using 'verbs' RDMA provider 00:01:15.126 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:30.035 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:30.035 Creating mk/config.mk...done. 00:01:30.035 Creating mk/cc.flags.mk...done. 00:01:30.035 Type 'make' to build. 
00:01:30.035 08:46:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:30.035 08:46:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:30.035 08:46:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:30.035 08:46:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.035 ************************************ 00:01:30.035 START TEST make 00:01:30.035 ************************************ 00:01:30.035 08:46:53 make -- common/autotest_common.sh@1129 -- $ make -j144 00:01:30.035 make[1]: Nothing to be done for 'all'. 00:01:30.296 The Meson build system 00:01:30.296 Version: 1.5.0 00:01:30.296 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:30.296 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:30.296 Build type: native build 00:01:30.296 Project name: libvfio-user 00:01:30.296 Project version: 0.0.1 00:01:30.296 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:30.296 C linker for the host machine: cc ld.bfd 2.40-14 00:01:30.296 Host machine cpu family: x86_64 00:01:30.296 Host machine cpu: x86_64 00:01:30.296 Run-time dependency threads found: YES 00:01:30.296 Library dl found: YES 00:01:30.296 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:30.296 Run-time dependency json-c found: YES 0.17 00:01:30.296 Run-time dependency cmocka found: YES 1.1.7 00:01:30.296 Program pytest-3 found: NO 00:01:30.296 Program flake8 found: NO 00:01:30.296 Program misspell-fixer found: NO 00:01:30.296 Program restructuredtext-lint found: NO 00:01:30.296 Program valgrind found: YES (/usr/bin/valgrind) 00:01:30.296 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:30.296 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:30.296 Compiler for C supports arguments -Wwrite-strings: YES 00:01:30.296 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:30.296 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:30.296 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:30.296 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:30.296 Build targets in project: 8 00:01:30.296 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:30.296 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:30.296 00:01:30.296 libvfio-user 0.0.1 00:01:30.296 00:01:30.296 User defined options 00:01:30.296 buildtype : debug 00:01:30.296 default_library: shared 00:01:30.296 libdir : /usr/local/lib 00:01:30.296 00:01:30.296 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:30.870 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:30.870 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:30.870 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:30.870 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:30.870 [4/37] Compiling C object samples/null.p/null.c.o 00:01:30.870 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:30.870 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:30.870 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:30.870 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:30.870 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:30.870 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:30.870 [11/37] Compiling C object 
samples/client.p/.._lib_migration.c.o 00:01:30.870 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:30.870 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:30.870 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:30.870 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:30.870 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:30.870 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:30.870 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:30.870 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:30.870 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:30.870 [21/37] Compiling C object samples/server.p/server.c.o 00:01:30.870 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:30.870 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:30.870 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:30.870 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:30.870 [26/37] Compiling C object samples/client.p/client.c.o 00:01:30.870 [27/37] Linking target samples/client 00:01:30.870 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:30.870 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:31.131 [30/37] Linking target test/unit_tests 00:01:31.131 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:31.131 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:31.131 [33/37] Linking target samples/server 00:01:31.131 [34/37] Linking target samples/gpio-pci-idio-16 00:01:31.131 [35/37] Linking target samples/null 00:01:31.131 [36/37] Linking target samples/lspci 00:01:31.131 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:31.131 INFO: autodetecting backend as ninja 00:01:31.131 INFO: calculating backend command to run: 
/usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:31.393 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:31.654 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:31.654 ninja: no work to do. 00:01:38.253 The Meson build system 00:01:38.253 Version: 1.5.0 00:01:38.253 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:38.253 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:38.253 Build type: native build 00:01:38.253 Program cat found: YES (/usr/bin/cat) 00:01:38.253 Project name: DPDK 00:01:38.253 Project version: 24.03.0 00:01:38.253 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:38.253 C linker for the host machine: cc ld.bfd 2.40-14 00:01:38.253 Host machine cpu family: x86_64 00:01:38.253 Host machine cpu: x86_64 00:01:38.253 Message: ## Building in Developer Mode ## 00:01:38.253 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:38.253 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:38.253 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:38.253 Program python3 found: YES (/usr/bin/python3) 00:01:38.253 Program cat found: YES (/usr/bin/cat) 00:01:38.253 Compiler for C supports arguments -march=native: YES 00:01:38.253 Checking for size of "void *" : 8 00:01:38.253 Checking for size of "void *" : 8 (cached) 00:01:38.253 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:38.253 Library m found: YES 00:01:38.253 Library numa found: YES 00:01:38.253 Has header "numaif.h" : YES 00:01:38.253 
Library fdt found: NO 00:01:38.253 Library execinfo found: NO 00:01:38.253 Has header "execinfo.h" : YES 00:01:38.253 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:38.253 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:38.253 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:38.253 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:38.253 Run-time dependency openssl found: YES 3.1.1 00:01:38.253 Run-time dependency libpcap found: YES 1.10.4 00:01:38.253 Has header "pcap.h" with dependency libpcap: YES 00:01:38.253 Compiler for C supports arguments -Wcast-qual: YES 00:01:38.253 Compiler for C supports arguments -Wdeprecated: YES 00:01:38.253 Compiler for C supports arguments -Wformat: YES 00:01:38.253 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:38.253 Compiler for C supports arguments -Wformat-security: NO 00:01:38.253 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:38.253 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:38.253 Compiler for C supports arguments -Wnested-externs: YES 00:01:38.253 Compiler for C supports arguments -Wold-style-definition: YES 00:01:38.253 Compiler for C supports arguments -Wpointer-arith: YES 00:01:38.253 Compiler for C supports arguments -Wsign-compare: YES 00:01:38.253 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:38.253 Compiler for C supports arguments -Wundef: YES 00:01:38.253 Compiler for C supports arguments -Wwrite-strings: YES 00:01:38.253 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:38.253 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:38.253 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:38.253 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:38.253 Program objdump found: YES (/usr/bin/objdump) 00:01:38.253 Compiler for C supports arguments -mavx512f: YES 00:01:38.253 Checking if "AVX512 checking" 
compiles: YES 00:01:38.253 Fetching value of define "__SSE4_2__" : 1 00:01:38.253 Fetching value of define "__AES__" : 1 00:01:38.253 Fetching value of define "__AVX__" : 1 00:01:38.253 Fetching value of define "__AVX2__" : 1 00:01:38.253 Fetching value of define "__AVX512BW__" : 1 00:01:38.253 Fetching value of define "__AVX512CD__" : 1 00:01:38.253 Fetching value of define "__AVX512DQ__" : 1 00:01:38.253 Fetching value of define "__AVX512F__" : 1 00:01:38.253 Fetching value of define "__AVX512VL__" : 1 00:01:38.253 Fetching value of define "__PCLMUL__" : 1 00:01:38.253 Fetching value of define "__RDRND__" : 1 00:01:38.253 Fetching value of define "__RDSEED__" : 1 00:01:38.253 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:38.253 Fetching value of define "__znver1__" : (undefined) 00:01:38.253 Fetching value of define "__znver2__" : (undefined) 00:01:38.253 Fetching value of define "__znver3__" : (undefined) 00:01:38.253 Fetching value of define "__znver4__" : (undefined) 00:01:38.253 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:38.253 Message: lib/log: Defining dependency "log" 00:01:38.253 Message: lib/kvargs: Defining dependency "kvargs" 00:01:38.253 Message: lib/telemetry: Defining dependency "telemetry" 00:01:38.253 Checking for function "getentropy" : NO 00:01:38.253 Message: lib/eal: Defining dependency "eal" 00:01:38.253 Message: lib/ring: Defining dependency "ring" 00:01:38.253 Message: lib/rcu: Defining dependency "rcu" 00:01:38.253 Message: lib/mempool: Defining dependency "mempool" 00:01:38.253 Message: lib/mbuf: Defining dependency "mbuf" 00:01:38.253 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:38.253 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:38.253 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:38.253 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:38.253 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:38.253 Fetching value of define "__VPCLMULQDQ__" : 1 
(cached) 00:01:38.253 Compiler for C supports arguments -mpclmul: YES 00:01:38.253 Compiler for C supports arguments -maes: YES 00:01:38.253 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:38.253 Compiler for C supports arguments -mavx512bw: YES 00:01:38.253 Compiler for C supports arguments -mavx512dq: YES 00:01:38.253 Compiler for C supports arguments -mavx512vl: YES 00:01:38.253 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:38.253 Compiler for C supports arguments -mavx2: YES 00:01:38.253 Compiler for C supports arguments -mavx: YES 00:01:38.253 Message: lib/net: Defining dependency "net" 00:01:38.253 Message: lib/meter: Defining dependency "meter" 00:01:38.253 Message: lib/ethdev: Defining dependency "ethdev" 00:01:38.253 Message: lib/pci: Defining dependency "pci" 00:01:38.253 Message: lib/cmdline: Defining dependency "cmdline" 00:01:38.253 Message: lib/hash: Defining dependency "hash" 00:01:38.253 Message: lib/timer: Defining dependency "timer" 00:01:38.253 Message: lib/compressdev: Defining dependency "compressdev" 00:01:38.253 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:38.253 Message: lib/dmadev: Defining dependency "dmadev" 00:01:38.253 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:38.253 Message: lib/power: Defining dependency "power" 00:01:38.253 Message: lib/reorder: Defining dependency "reorder" 00:01:38.253 Message: lib/security: Defining dependency "security" 00:01:38.253 Has header "linux/userfaultfd.h" : YES 00:01:38.253 Has header "linux/vduse.h" : YES 00:01:38.253 Message: lib/vhost: Defining dependency "vhost" 00:01:38.253 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:38.253 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:38.253 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:38.253 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:38.253 Message: Disabling raw/* drivers: missing internal dependency 
"rawdev" 00:01:38.253 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:38.253 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:38.253 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:38.253 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:38.253 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:38.253 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:38.253 Configuring doxy-api-html.conf using configuration 00:01:38.253 Configuring doxy-api-man.conf using configuration 00:01:38.253 Program mandb found: YES (/usr/bin/mandb) 00:01:38.253 Program sphinx-build found: NO 00:01:38.253 Configuring rte_build_config.h using configuration 00:01:38.253 Message: 00:01:38.253 ================= 00:01:38.253 Applications Enabled 00:01:38.253 ================= 00:01:38.253 00:01:38.253 apps: 00:01:38.253 00:01:38.253 00:01:38.253 Message: 00:01:38.253 ================= 00:01:38.253 Libraries Enabled 00:01:38.253 ================= 00:01:38.253 00:01:38.253 libs: 00:01:38.253 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:38.253 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:38.253 cryptodev, dmadev, power, reorder, security, vhost, 00:01:38.253 00:01:38.253 Message: 00:01:38.253 =============== 00:01:38.253 Drivers Enabled 00:01:38.253 =============== 00:01:38.253 00:01:38.253 common: 00:01:38.253 00:01:38.253 bus: 00:01:38.253 pci, vdev, 00:01:38.253 mempool: 00:01:38.253 ring, 00:01:38.253 dma: 00:01:38.253 00:01:38.253 net: 00:01:38.253 00:01:38.253 crypto: 00:01:38.253 00:01:38.253 compress: 00:01:38.253 00:01:38.253 vdpa: 00:01:38.253 00:01:38.253 00:01:38.253 Message: 00:01:38.253 ================= 00:01:38.253 Content Skipped 00:01:38.253 ================= 00:01:38.253 00:01:38.253 apps: 00:01:38.253 dumpcap: explicitly disabled via build config 00:01:38.253 graph: explicitly disabled via 
build config 00:01:38.253 pdump: explicitly disabled via build config 00:01:38.253 proc-info: explicitly disabled via build config 00:01:38.253 test-acl: explicitly disabled via build config 00:01:38.253 test-bbdev: explicitly disabled via build config 00:01:38.253 test-cmdline: explicitly disabled via build config 00:01:38.253 test-compress-perf: explicitly disabled via build config 00:01:38.253 test-crypto-perf: explicitly disabled via build config 00:01:38.253 test-dma-perf: explicitly disabled via build config 00:01:38.253 test-eventdev: explicitly disabled via build config 00:01:38.253 test-fib: explicitly disabled via build config 00:01:38.254 test-flow-perf: explicitly disabled via build config 00:01:38.254 test-gpudev: explicitly disabled via build config 00:01:38.254 test-mldev: explicitly disabled via build config 00:01:38.254 test-pipeline: explicitly disabled via build config 00:01:38.254 test-pmd: explicitly disabled via build config 00:01:38.254 test-regex: explicitly disabled via build config 00:01:38.254 test-sad: explicitly disabled via build config 00:01:38.254 test-security-perf: explicitly disabled via build config 00:01:38.254 00:01:38.254 libs: 00:01:38.254 argparse: explicitly disabled via build config 00:01:38.254 metrics: explicitly disabled via build config 00:01:38.254 acl: explicitly disabled via build config 00:01:38.254 bbdev: explicitly disabled via build config 00:01:38.254 bitratestats: explicitly disabled via build config 00:01:38.254 bpf: explicitly disabled via build config 00:01:38.254 cfgfile: explicitly disabled via build config 00:01:38.254 distributor: explicitly disabled via build config 00:01:38.254 efd: explicitly disabled via build config 00:01:38.254 eventdev: explicitly disabled via build config 00:01:38.254 dispatcher: explicitly disabled via build config 00:01:38.254 gpudev: explicitly disabled via build config 00:01:38.254 gro: explicitly disabled via build config 00:01:38.254 gso: explicitly disabled via build 
config 00:01:38.254 ip_frag: explicitly disabled via build config 00:01:38.254 jobstats: explicitly disabled via build config 00:01:38.254 latencystats: explicitly disabled via build config 00:01:38.254 lpm: explicitly disabled via build config 00:01:38.254 member: explicitly disabled via build config 00:01:38.254 pcapng: explicitly disabled via build config 00:01:38.254 rawdev: explicitly disabled via build config 00:01:38.254 regexdev: explicitly disabled via build config 00:01:38.254 mldev: explicitly disabled via build config 00:01:38.254 rib: explicitly disabled via build config 00:01:38.254 sched: explicitly disabled via build config 00:01:38.254 stack: explicitly disabled via build config 00:01:38.254 ipsec: explicitly disabled via build config 00:01:38.254 pdcp: explicitly disabled via build config 00:01:38.254 fib: explicitly disabled via build config 00:01:38.254 port: explicitly disabled via build config 00:01:38.254 pdump: explicitly disabled via build config 00:01:38.254 table: explicitly disabled via build config 00:01:38.254 pipeline: explicitly disabled via build config 00:01:38.254 graph: explicitly disabled via build config 00:01:38.254 node: explicitly disabled via build config 00:01:38.254 00:01:38.254 drivers: 00:01:38.254 common/cpt: not in enabled drivers build config 00:01:38.254 common/dpaax: not in enabled drivers build config 00:01:38.254 common/iavf: not in enabled drivers build config 00:01:38.254 common/idpf: not in enabled drivers build config 00:01:38.254 common/ionic: not in enabled drivers build config 00:01:38.254 common/mvep: not in enabled drivers build config 00:01:38.254 common/octeontx: not in enabled drivers build config 00:01:38.254 bus/auxiliary: not in enabled drivers build config 00:01:38.254 bus/cdx: not in enabled drivers build config 00:01:38.254 bus/dpaa: not in enabled drivers build config 00:01:38.254 bus/fslmc: not in enabled drivers build config 00:01:38.254 bus/ifpga: not in enabled drivers build config 
00:01:38.254 bus/platform: not in enabled drivers build config 00:01:38.254 bus/uacce: not in enabled drivers build config 00:01:38.254 bus/vmbus: not in enabled drivers build config 00:01:38.254 common/cnxk: not in enabled drivers build config 00:01:38.254 common/mlx5: not in enabled drivers build config 00:01:38.254 common/nfp: not in enabled drivers build config 00:01:38.254 common/nitrox: not in enabled drivers build config 00:01:38.254 common/qat: not in enabled drivers build config 00:01:38.254 common/sfc_efx: not in enabled drivers build config 00:01:38.254 mempool/bucket: not in enabled drivers build config 00:01:38.254 mempool/cnxk: not in enabled drivers build config 00:01:38.254 mempool/dpaa: not in enabled drivers build config 00:01:38.254 mempool/dpaa2: not in enabled drivers build config 00:01:38.254 mempool/octeontx: not in enabled drivers build config 00:01:38.254 mempool/stack: not in enabled drivers build config 00:01:38.254 dma/cnxk: not in enabled drivers build config 00:01:38.254 dma/dpaa: not in enabled drivers build config 00:01:38.254 dma/dpaa2: not in enabled drivers build config 00:01:38.254 dma/hisilicon: not in enabled drivers build config 00:01:38.254 dma/idxd: not in enabled drivers build config 00:01:38.254 dma/ioat: not in enabled drivers build config 00:01:38.254 dma/skeleton: not in enabled drivers build config 00:01:38.254 net/af_packet: not in enabled drivers build config 00:01:38.254 net/af_xdp: not in enabled drivers build config 00:01:38.254 net/ark: not in enabled drivers build config 00:01:38.254 net/atlantic: not in enabled drivers build config 00:01:38.254 net/avp: not in enabled drivers build config 00:01:38.254 net/axgbe: not in enabled drivers build config 00:01:38.254 net/bnx2x: not in enabled drivers build config 00:01:38.254 net/bnxt: not in enabled drivers build config 00:01:38.254 net/bonding: not in enabled drivers build config 00:01:38.254 net/cnxk: not in enabled drivers build config 00:01:38.254 net/cpfl: not 
in enabled drivers build config 00:01:38.254 net/cxgbe: not in enabled drivers build config 00:01:38.254 net/dpaa: not in enabled drivers build config 00:01:38.254 net/dpaa2: not in enabled drivers build config 00:01:38.254 net/e1000: not in enabled drivers build config 00:01:38.254 net/ena: not in enabled drivers build config 00:01:38.254 net/enetc: not in enabled drivers build config 00:01:38.254 net/enetfec: not in enabled drivers build config 00:01:38.254 net/enic: not in enabled drivers build config 00:01:38.254 net/failsafe: not in enabled drivers build config 00:01:38.254 net/fm10k: not in enabled drivers build config 00:01:38.254 net/gve: not in enabled drivers build config 00:01:38.254 net/hinic: not in enabled drivers build config 00:01:38.254 net/hns3: not in enabled drivers build config 00:01:38.254 net/i40e: not in enabled drivers build config 00:01:38.254 net/iavf: not in enabled drivers build config 00:01:38.254 net/ice: not in enabled drivers build config 00:01:38.254 net/idpf: not in enabled drivers build config 00:01:38.254 net/igc: not in enabled drivers build config 00:01:38.254 net/ionic: not in enabled drivers build config 00:01:38.254 net/ipn3ke: not in enabled drivers build config 00:01:38.254 net/ixgbe: not in enabled drivers build config 00:01:38.254 net/mana: not in enabled drivers build config 00:01:38.254 net/memif: not in enabled drivers build config 00:01:38.254 net/mlx4: not in enabled drivers build config 00:01:38.254 net/mlx5: not in enabled drivers build config 00:01:38.254 net/mvneta: not in enabled drivers build config 00:01:38.254 net/mvpp2: not in enabled drivers build config 00:01:38.254 net/netvsc: not in enabled drivers build config 00:01:38.254 net/nfb: not in enabled drivers build config 00:01:38.254 net/nfp: not in enabled drivers build config 00:01:38.254 net/ngbe: not in enabled drivers build config 00:01:38.254 net/null: not in enabled drivers build config 00:01:38.254 net/octeontx: not in enabled drivers build config 
00:01:38.254 net/octeon_ep: not in enabled drivers build config 00:01:38.254 net/pcap: not in enabled drivers build config 00:01:38.254 net/pfe: not in enabled drivers build config 00:01:38.254 net/qede: not in enabled drivers build config 00:01:38.254 net/ring: not in enabled drivers build config 00:01:38.254 net/sfc: not in enabled drivers build config 00:01:38.254 net/softnic: not in enabled drivers build config 00:01:38.254 net/tap: not in enabled drivers build config 00:01:38.254 net/thunderx: not in enabled drivers build config 00:01:38.254 net/txgbe: not in enabled drivers build config 00:01:38.254 net/vdev_netvsc: not in enabled drivers build config 00:01:38.254 net/vhost: not in enabled drivers build config 00:01:38.254 net/virtio: not in enabled drivers build config 00:01:38.254 net/vmxnet3: not in enabled drivers build config 00:01:38.254 raw/*: missing internal dependency, "rawdev" 00:01:38.254 crypto/armv8: not in enabled drivers build config 00:01:38.254 crypto/bcmfs: not in enabled drivers build config 00:01:38.254 crypto/caam_jr: not in enabled drivers build config 00:01:38.254 crypto/ccp: not in enabled drivers build config 00:01:38.254 crypto/cnxk: not in enabled drivers build config 00:01:38.254 crypto/dpaa_sec: not in enabled drivers build config 00:01:38.254 crypto/dpaa2_sec: not in enabled drivers build config 00:01:38.254 crypto/ipsec_mb: not in enabled drivers build config 00:01:38.254 crypto/mlx5: not in enabled drivers build config 00:01:38.254 crypto/mvsam: not in enabled drivers build config 00:01:38.254 crypto/nitrox: not in enabled drivers build config 00:01:38.254 crypto/null: not in enabled drivers build config 00:01:38.254 crypto/octeontx: not in enabled drivers build config 00:01:38.254 crypto/openssl: not in enabled drivers build config 00:01:38.254 crypto/scheduler: not in enabled drivers build config 00:01:38.254 crypto/uadk: not in enabled drivers build config 00:01:38.254 crypto/virtio: not in enabled drivers build config 
00:01:38.254 compress/isal: not in enabled drivers build config 00:01:38.254 compress/mlx5: not in enabled drivers build config 00:01:38.254 compress/nitrox: not in enabled drivers build config 00:01:38.254 compress/octeontx: not in enabled drivers build config 00:01:38.254 compress/zlib: not in enabled drivers build config 00:01:38.254 regex/*: missing internal dependency, "regexdev" 00:01:38.254 ml/*: missing internal dependency, "mldev" 00:01:38.254 vdpa/ifc: not in enabled drivers build config 00:01:38.254 vdpa/mlx5: not in enabled drivers build config 00:01:38.254 vdpa/nfp: not in enabled drivers build config 00:01:38.254 vdpa/sfc: not in enabled drivers build config 00:01:38.254 event/*: missing internal dependency, "eventdev" 00:01:38.254 baseband/*: missing internal dependency, "bbdev" 00:01:38.254 gpu/*: missing internal dependency, "gpudev" 00:01:38.254 00:01:38.254 00:01:38.254 Build targets in project: 84 00:01:38.254 00:01:38.254 DPDK 24.03.0 00:01:38.254 00:01:38.254 User defined options 00:01:38.254 buildtype : debug 00:01:38.254 default_library : shared 00:01:38.254 libdir : lib 00:01:38.254 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:38.254 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:38.254 c_link_args : 00:01:38.254 cpu_instruction_set: native 00:01:38.255 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:38.255 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:38.255 enable_docs : false 00:01:38.255 enable_drivers : 
bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:38.255 enable_kmods : false 00:01:38.255 max_lcores : 128 00:01:38.255 tests : false 00:01:38.255 00:01:38.255 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:38.255 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:38.255 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:38.255 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:38.255 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:38.255 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:38.255 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:38.255 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:38.255 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:38.255 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:38.255 [9/267] Linking static target lib/librte_kvargs.a 00:01:38.255 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:38.255 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:38.255 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:38.255 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:38.255 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:38.255 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:38.255 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:38.255 [17/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:38.255 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.255 [19/267] 
Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:38.255 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.514 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.514 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.514 [23/267] Linking static target lib/librte_log.a 00:01:38.514 [24/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:38.514 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.514 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.514 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:38.514 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:38.514 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.514 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.514 [31/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:38.514 [32/267] Linking static target lib/librte_pci.a 00:01:38.514 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.514 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:38.514 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.514 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.514 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:38.514 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:38.775 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.775 [40/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:38.775 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:38.775 
[42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:38.775 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.775 [44/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.775 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:38.775 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:38.775 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:38.775 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:38.775 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.775 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:38.775 [51/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:38.775 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.775 [53/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:38.775 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.775 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:38.775 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:38.775 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:38.775 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:38.775 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:38.775 [60/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:38.775 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.775 [62/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:38.776 [63/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:38.776 [64/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:38.776 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.776 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:38.776 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.776 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:38.776 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:38.776 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.776 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:38.776 [72/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.776 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:38.776 [74/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:38.776 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:38.776 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:38.776 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:38.776 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:38.776 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.776 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:38.776 [81/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:38.776 [82/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:38.776 [83/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:38.776 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.776 [85/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 
00:01:38.776 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:38.776 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:38.776 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.776 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.776 [90/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:38.776 [91/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.776 [92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:38.776 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.776 [94/267] Linking static target lib/librte_meter.a 00:01:38.776 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:38.776 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:38.776 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:38.776 [98/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:38.776 [99/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:38.776 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:38.776 [101/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:38.776 [102/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.776 [103/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:38.776 [104/267] Linking static target lib/librte_ring.a 00:01:38.776 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:38.776 [106/267] Linking static target lib/librte_telemetry.a 00:01:38.776 [107/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:38.776 [108/267] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:38.776 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:38.776 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:38.776 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:38.776 [112/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:38.776 [113/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:38.776 [114/267] Linking static target lib/librte_timer.a 00:01:38.776 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:38.776 [116/267] Linking static target lib/librte_cmdline.a 00:01:38.776 [117/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.776 [118/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:38.776 [119/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:39.037 [120/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:39.037 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:39.037 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:39.037 [123/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:39.037 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:39.037 [125/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:39.037 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:39.037 [127/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:39.037 [128/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.037 [129/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.037 [130/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.037 [131/267] Linking static target 
lib/librte_rcu.a 00:01:39.037 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:39.037 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:39.037 [134/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.037 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:39.037 [136/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:39.037 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:39.037 [138/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:39.037 [139/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:39.037 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.037 [141/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.037 [142/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:39.037 [143/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.037 [144/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:39.037 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:39.037 [146/267] Linking static target lib/librte_reorder.a 00:01:39.037 [147/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:39.037 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:39.037 [149/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:39.037 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:39.037 [151/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:39.037 [152/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:39.037 [153/267] Linking static target lib/librte_compressdev.a 00:01:39.037 [154/267] Linking static target 
lib/librte_net.a 00:01:39.037 [155/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:39.037 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:39.037 [157/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.037 [158/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:39.037 [159/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:39.037 [160/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:39.037 [161/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.037 [162/267] Linking static target lib/librte_dmadev.a 00:01:39.037 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.037 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:39.037 [165/267] Linking static target lib/librte_mempool.a 00:01:39.037 [166/267] Linking static target lib/librte_power.a 00:01:39.038 [167/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:39.038 [168/267] Linking target lib/librte_log.so.24.1 00:01:39.038 [169/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:39.038 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:39.038 [171/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:39.038 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:39.038 [173/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:39.038 [174/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:39.038 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.038 [176/267] Linking static target lib/librte_eal.a 00:01:39.038 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:39.038 [178/267] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:01:39.038 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.038 [180/267] Linking static target lib/librte_security.a 00:01:39.038 [181/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.299 [182/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:39.299 [183/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:39.299 [184/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.299 [185/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:39.299 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.299 [187/267] Linking static target lib/librte_mbuf.a 00:01:39.299 [188/267] Linking static target lib/librte_hash.a 00:01:39.299 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.299 [190/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.299 [191/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.299 [192/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.299 [193/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:39.299 [194/267] Linking static target drivers/librte_bus_vdev.a 00:01:39.299 [195/267] Linking target lib/librte_kvargs.so.24.1 00:01:39.299 [196/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.299 [197/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.299 [198/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:39.299 [199/267] Linking static target drivers/librte_mempool_ring.a 00:01:39.299 [200/267] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.299 [201/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.299 [202/267] Linking static target drivers/librte_bus_pci.a 00:01:39.299 [203/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.299 [204/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.299 [205/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:39.299 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.561 [207/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.561 [208/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.561 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:39.561 [210/267] Linking static target lib/librte_cryptodev.a 00:01:39.561 [211/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.561 [212/267] Linking target lib/librte_telemetry.so.24.1 00:01:39.561 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.821 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:39.821 [215/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.821 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.821 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.821 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:39.821 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:39.821 [220/267] 
Linking static target lib/librte_ethdev.a 00:01:40.081 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.081 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.081 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.342 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.342 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.342 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.602 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:40.602 [228/267] Linking static target lib/librte_vhost.a 00:01:41.986 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.928 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.513 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.901 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.901 [233/267] Linking target lib/librte_eal.so.24.1 00:01:50.901 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:51.161 [235/267] Linking target lib/librte_ring.so.24.1 00:01:51.161 [236/267] Linking target lib/librte_meter.so.24.1 00:01:51.161 [237/267] Linking target lib/librte_pci.so.24.1 00:01:51.161 [238/267] Linking target lib/librte_timer.so.24.1 00:01:51.161 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:51.161 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:51.161 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:51.161 [242/267] Generating symbol 
file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:51.161 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:51.161 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:51.161 [245/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:51.161 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:51.161 [247/267] Linking target lib/librte_mempool.so.24.1 00:01:51.161 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:51.422 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:51.422 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:51.422 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:51.422 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:51.422 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:51.682 [254/267] Linking target lib/librte_compressdev.so.24.1 00:01:51.682 [255/267] Linking target lib/librte_net.so.24.1 00:01:51.682 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:51.682 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:51.682 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:51.682 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:51.682 [260/267] Linking target lib/librte_cmdline.so.24.1 00:01:51.682 [261/267] Linking target lib/librte_hash.so.24.1 00:01:51.682 [262/267] Linking target lib/librte_security.so.24.1 00:01:51.682 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:51.943 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:51.943 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:51.943 [266/267] Linking target lib/librte_power.so.24.1 
00:01:51.943 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:51.943 INFO: autodetecting backend as ninja 00:01:51.943 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:55.240 CC lib/ut_mock/mock.o 00:01:55.240 CC lib/log/log.o 00:01:55.240 CC lib/ut/ut.o 00:01:55.240 CC lib/log/log_flags.o 00:01:55.240 CC lib/log/log_deprecated.o 00:01:55.501 LIB libspdk_log.a 00:01:55.501 LIB libspdk_ut.a 00:01:55.501 LIB libspdk_ut_mock.a 00:01:55.501 SO libspdk_ut.so.2.0 00:01:55.501 SO libspdk_log.so.7.1 00:01:55.501 SO libspdk_ut_mock.so.6.0 00:01:55.501 SYMLINK libspdk_ut.so 00:01:55.761 SYMLINK libspdk_ut_mock.so 00:01:55.761 SYMLINK libspdk_log.so 00:01:56.022 CC lib/dma/dma.o 00:01:56.022 CC lib/util/base64.o 00:01:56.022 CC lib/util/bit_array.o 00:01:56.022 CC lib/util/cpuset.o 00:01:56.022 CC lib/util/crc16.o 00:01:56.022 CXX lib/trace_parser/trace.o 00:01:56.022 CC lib/util/crc32.o 00:01:56.022 CC lib/util/crc32c.o 00:01:56.022 CC lib/ioat/ioat.o 00:01:56.022 CC lib/util/crc32_ieee.o 00:01:56.022 CC lib/util/crc64.o 00:01:56.022 CC lib/util/dif.o 00:01:56.022 CC lib/util/fd.o 00:01:56.022 CC lib/util/fd_group.o 00:01:56.022 CC lib/util/file.o 00:01:56.022 CC lib/util/hexlify.o 00:01:56.022 CC lib/util/iov.o 00:01:56.022 CC lib/util/math.o 00:01:56.022 CC lib/util/net.o 00:01:56.022 CC lib/util/pipe.o 00:01:56.022 CC lib/util/strerror_tls.o 00:01:56.022 CC lib/util/string.o 00:01:56.022 CC lib/util/uuid.o 00:01:56.022 CC lib/util/xor.o 00:01:56.022 CC lib/util/zipf.o 00:01:56.022 CC lib/util/md5.o 00:01:56.284 CC lib/vfio_user/host/vfio_user_pci.o 00:01:56.284 CC lib/vfio_user/host/vfio_user.o 00:01:56.284 LIB libspdk_dma.a 00:01:56.284 SO libspdk_dma.so.5.0 00:01:56.284 LIB libspdk_ioat.a 00:01:56.284 SYMLINK libspdk_dma.so 00:01:56.284 SO libspdk_ioat.so.7.0 00:01:56.284 LIB libspdk_vfio_user.a 00:01:56.284 SYMLINK libspdk_ioat.so 00:01:56.544 SO 
libspdk_vfio_user.so.5.0 00:01:56.544 LIB libspdk_util.a 00:01:56.544 SYMLINK libspdk_vfio_user.so 00:01:56.544 SO libspdk_util.so.10.1 00:01:56.806 SYMLINK libspdk_util.so 00:01:56.806 LIB libspdk_trace_parser.a 00:01:56.806 SO libspdk_trace_parser.so.6.0 00:01:57.068 SYMLINK libspdk_trace_parser.so 00:01:57.068 CC lib/rdma_utils/rdma_utils.o 00:01:57.068 CC lib/json/json_parse.o 00:01:57.068 CC lib/json/json_util.o 00:01:57.068 CC lib/json/json_write.o 00:01:57.068 CC lib/conf/conf.o 00:01:57.068 CC lib/vmd/vmd.o 00:01:57.068 CC lib/idxd/idxd.o 00:01:57.068 CC lib/vmd/led.o 00:01:57.068 CC lib/idxd/idxd_user.o 00:01:57.068 CC lib/env_dpdk/env.o 00:01:57.068 CC lib/idxd/idxd_kernel.o 00:01:57.068 CC lib/env_dpdk/memory.o 00:01:57.068 CC lib/env_dpdk/pci.o 00:01:57.068 CC lib/env_dpdk/init.o 00:01:57.068 CC lib/env_dpdk/threads.o 00:01:57.068 CC lib/env_dpdk/pci_ioat.o 00:01:57.068 CC lib/env_dpdk/pci_virtio.o 00:01:57.068 CC lib/env_dpdk/pci_vmd.o 00:01:57.068 CC lib/env_dpdk/pci_idxd.o 00:01:57.068 CC lib/env_dpdk/pci_event.o 00:01:57.068 CC lib/env_dpdk/sigbus_handler.o 00:01:57.068 CC lib/env_dpdk/pci_dpdk.o 00:01:57.068 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:57.068 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:57.329 LIB libspdk_conf.a 00:01:57.329 LIB libspdk_rdma_utils.a 00:01:57.329 SO libspdk_conf.so.6.0 00:01:57.329 LIB libspdk_json.a 00:01:57.329 SO libspdk_rdma_utils.so.1.0 00:01:57.329 SO libspdk_json.so.6.0 00:01:57.329 SYMLINK libspdk_conf.so 00:01:57.329 SYMLINK libspdk_rdma_utils.so 00:01:57.590 SYMLINK libspdk_json.so 00:01:57.590 LIB libspdk_idxd.a 00:01:57.590 SO libspdk_idxd.so.12.1 00:01:57.590 LIB libspdk_vmd.a 00:01:57.590 SO libspdk_vmd.so.6.0 00:01:57.852 SYMLINK libspdk_idxd.so 00:01:57.852 SYMLINK libspdk_vmd.so 00:01:57.852 CC lib/rdma_provider/common.o 00:01:57.852 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:57.852 CC lib/jsonrpc/jsonrpc_server.o 00:01:57.852 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:57.852 CC lib/jsonrpc/jsonrpc_client.o 
00:01:57.852 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:58.114 LIB libspdk_rdma_provider.a 00:01:58.114 SO libspdk_rdma_provider.so.7.0 00:01:58.114 LIB libspdk_jsonrpc.a 00:01:58.114 SO libspdk_jsonrpc.so.6.0 00:01:58.114 SYMLINK libspdk_rdma_provider.so 00:01:58.114 SYMLINK libspdk_jsonrpc.so 00:01:58.376 LIB libspdk_env_dpdk.a 00:01:58.376 SO libspdk_env_dpdk.so.15.1 00:01:58.638 SYMLINK libspdk_env_dpdk.so 00:01:58.638 CC lib/rpc/rpc.o 00:01:58.899 LIB libspdk_rpc.a 00:01:58.899 SO libspdk_rpc.so.6.0 00:01:58.899 SYMLINK libspdk_rpc.so 00:01:59.160 CC lib/notify/notify.o 00:01:59.160 CC lib/notify/notify_rpc.o 00:01:59.160 CC lib/trace/trace.o 00:01:59.160 CC lib/trace/trace_flags.o 00:01:59.160 CC lib/trace/trace_rpc.o 00:01:59.160 CC lib/keyring/keyring.o 00:01:59.160 CC lib/keyring/keyring_rpc.o 00:01:59.423 LIB libspdk_notify.a 00:01:59.423 SO libspdk_notify.so.6.0 00:01:59.423 LIB libspdk_trace.a 00:01:59.423 LIB libspdk_keyring.a 00:01:59.684 SYMLINK libspdk_notify.so 00:01:59.684 SO libspdk_trace.so.11.0 00:01:59.684 SO libspdk_keyring.so.2.0 00:01:59.684 SYMLINK libspdk_trace.so 00:01:59.684 SYMLINK libspdk_keyring.so 00:01:59.945 CC lib/sock/sock.o 00:01:59.945 CC lib/sock/sock_rpc.o 00:01:59.945 CC lib/thread/thread.o 00:01:59.945 CC lib/thread/iobuf.o 00:02:00.516 LIB libspdk_sock.a 00:02:00.516 SO libspdk_sock.so.10.0 00:02:00.516 SYMLINK libspdk_sock.so 00:02:00.777 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:00.777 CC lib/nvme/nvme_ctrlr.o 00:02:00.777 CC lib/nvme/nvme_fabric.o 00:02:00.777 CC lib/nvme/nvme_ns_cmd.o 00:02:00.777 CC lib/nvme/nvme_ns.o 00:02:00.777 CC lib/nvme/nvme_pcie_common.o 00:02:00.777 CC lib/nvme/nvme_pcie.o 00:02:00.777 CC lib/nvme/nvme_qpair.o 00:02:00.777 CC lib/nvme/nvme.o 00:02:00.777 CC lib/nvme/nvme_quirks.o 00:02:00.777 CC lib/nvme/nvme_transport.o 00:02:00.777 CC lib/nvme/nvme_discovery.o 00:02:00.777 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:00.777 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:00.777 CC lib/nvme/nvme_tcp.o 
00:02:00.777 CC lib/nvme/nvme_opal.o 00:02:00.777 CC lib/nvme/nvme_io_msg.o 00:02:00.777 CC lib/nvme/nvme_poll_group.o 00:02:00.777 CC lib/nvme/nvme_zns.o 00:02:00.777 CC lib/nvme/nvme_stubs.o 00:02:00.777 CC lib/nvme/nvme_auth.o 00:02:00.777 CC lib/nvme/nvme_cuse.o 00:02:00.777 CC lib/nvme/nvme_vfio_user.o 00:02:00.777 CC lib/nvme/nvme_rdma.o 00:02:01.349 LIB libspdk_thread.a 00:02:01.349 SO libspdk_thread.so.11.0 00:02:01.349 SYMLINK libspdk_thread.so 00:02:01.922 CC lib/vfu_tgt/tgt_endpoint.o 00:02:01.922 CC lib/vfu_tgt/tgt_rpc.o 00:02:01.922 CC lib/blob/blobstore.o 00:02:01.922 CC lib/virtio/virtio.o 00:02:01.922 CC lib/blob/request.o 00:02:01.922 CC lib/blob/blob_bs_dev.o 00:02:01.922 CC lib/virtio/virtio_vhost_user.o 00:02:01.922 CC lib/blob/zeroes.o 00:02:01.922 CC lib/virtio/virtio_vfio_user.o 00:02:01.922 CC lib/virtio/virtio_pci.o 00:02:01.922 CC lib/fsdev/fsdev.o 00:02:01.922 CC lib/fsdev/fsdev_io.o 00:02:01.922 CC lib/fsdev/fsdev_rpc.o 00:02:01.922 CC lib/init/json_config.o 00:02:01.922 CC lib/accel/accel.o 00:02:01.922 CC lib/init/subsystem.o 00:02:01.922 CC lib/accel/accel_rpc.o 00:02:01.922 CC lib/init/subsystem_rpc.o 00:02:01.923 CC lib/accel/accel_sw.o 00:02:01.923 CC lib/init/rpc.o 00:02:02.269 LIB libspdk_init.a 00:02:02.269 LIB libspdk_vfu_tgt.a 00:02:02.269 SO libspdk_init.so.6.0 00:02:02.269 LIB libspdk_virtio.a 00:02:02.269 SO libspdk_vfu_tgt.so.3.0 00:02:02.269 SO libspdk_virtio.so.7.0 00:02:02.269 SYMLINK libspdk_init.so 00:02:02.269 SYMLINK libspdk_vfu_tgt.so 00:02:02.269 SYMLINK libspdk_virtio.so 00:02:02.269 LIB libspdk_fsdev.a 00:02:02.530 SO libspdk_fsdev.so.2.0 00:02:02.530 SYMLINK libspdk_fsdev.so 00:02:02.530 CC lib/event/app.o 00:02:02.530 CC lib/event/reactor.o 00:02:02.530 CC lib/event/log_rpc.o 00:02:02.530 CC lib/event/app_rpc.o 00:02:02.530 CC lib/event/scheduler_static.o 00:02:02.791 LIB libspdk_accel.a 00:02:02.791 SO libspdk_accel.so.16.0 00:02:02.791 LIB libspdk_nvme.a 00:02:02.791 CC lib/fuse_dispatcher/fuse_dispatcher.o 
00:02:02.791 SYMLINK libspdk_accel.so 00:02:03.052 LIB libspdk_event.a 00:02:03.052 SO libspdk_nvme.so.15.0 00:02:03.052 SO libspdk_event.so.14.0 00:02:03.052 SYMLINK libspdk_event.so 00:02:03.313 SYMLINK libspdk_nvme.so 00:02:03.313 CC lib/bdev/bdev.o 00:02:03.313 CC lib/bdev/bdev_rpc.o 00:02:03.313 CC lib/bdev/bdev_zone.o 00:02:03.313 CC lib/bdev/part.o 00:02:03.313 CC lib/bdev/scsi_nvme.o 00:02:03.574 LIB libspdk_fuse_dispatcher.a 00:02:03.574 SO libspdk_fuse_dispatcher.so.1.0 00:02:03.574 SYMLINK libspdk_fuse_dispatcher.so 00:02:04.517 LIB libspdk_blob.a 00:02:04.517 SO libspdk_blob.so.11.0 00:02:04.517 SYMLINK libspdk_blob.so 00:02:05.087 CC lib/blobfs/blobfs.o 00:02:05.087 CC lib/blobfs/tree.o 00:02:05.087 CC lib/lvol/lvol.o 00:02:05.658 LIB libspdk_bdev.a 00:02:05.658 SO libspdk_bdev.so.17.0 00:02:05.658 LIB libspdk_blobfs.a 00:02:05.658 SYMLINK libspdk_bdev.so 00:02:05.658 SO libspdk_blobfs.so.10.0 00:02:05.658 LIB libspdk_lvol.a 00:02:05.658 SYMLINK libspdk_blobfs.so 00:02:05.658 SO libspdk_lvol.so.10.0 00:02:05.919 SYMLINK libspdk_lvol.so 00:02:06.178 CC lib/ftl/ftl_core.o 00:02:06.178 CC lib/nvmf/ctrlr.o 00:02:06.178 CC lib/nbd/nbd.o 00:02:06.178 CC lib/ftl/ftl_init.o 00:02:06.178 CC lib/nbd/nbd_rpc.o 00:02:06.178 CC lib/nvmf/ctrlr_discovery.o 00:02:06.178 CC lib/ublk/ublk.o 00:02:06.178 CC lib/scsi/dev.o 00:02:06.178 CC lib/ftl/ftl_layout.o 00:02:06.178 CC lib/nvmf/ctrlr_bdev.o 00:02:06.178 CC lib/scsi/lun.o 00:02:06.178 CC lib/ublk/ublk_rpc.o 00:02:06.178 CC lib/ftl/ftl_debug.o 00:02:06.178 CC lib/nvmf/subsystem.o 00:02:06.178 CC lib/scsi/port.o 00:02:06.178 CC lib/ftl/ftl_io.o 00:02:06.178 CC lib/nvmf/nvmf.o 00:02:06.178 CC lib/scsi/scsi.o 00:02:06.178 CC lib/ftl/ftl_sb.o 00:02:06.178 CC lib/nvmf/nvmf_rpc.o 00:02:06.178 CC lib/scsi/scsi_bdev.o 00:02:06.178 CC lib/ftl/ftl_l2p.o 00:02:06.178 CC lib/nvmf/transport.o 00:02:06.178 CC lib/ftl/ftl_l2p_flat.o 00:02:06.178 CC lib/scsi/scsi_pr.o 00:02:06.178 CC lib/scsi/scsi_rpc.o 00:02:06.178 CC lib/nvmf/tcp.o 
00:02:06.178 CC lib/ftl/ftl_nv_cache.o 00:02:06.178 CC lib/scsi/task.o 00:02:06.178 CC lib/ftl/ftl_band.o 00:02:06.178 CC lib/nvmf/stubs.o 00:02:06.178 CC lib/nvmf/mdns_server.o 00:02:06.178 CC lib/ftl/ftl_band_ops.o 00:02:06.178 CC lib/nvmf/vfio_user.o 00:02:06.178 CC lib/ftl/ftl_writer.o 00:02:06.178 CC lib/ftl/ftl_rq.o 00:02:06.178 CC lib/nvmf/rdma.o 00:02:06.178 CC lib/ftl/ftl_reloc.o 00:02:06.178 CC lib/nvmf/auth.o 00:02:06.178 CC lib/ftl/ftl_l2p_cache.o 00:02:06.178 CC lib/ftl/ftl_p2l.o 00:02:06.178 CC lib/ftl/ftl_p2l_log.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:06.178 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:06.178 CC lib/ftl/utils/ftl_conf.o 00:02:06.178 CC lib/ftl/utils/ftl_mempool.o 00:02:06.178 CC lib/ftl/utils/ftl_md.o 00:02:06.178 CC lib/ftl/utils/ftl_property.o 00:02:06.178 CC lib/ftl/utils/ftl_bitmap.o 00:02:06.178 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:06.178 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:06.178 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:06.178 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:06.178 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:06.178 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:06.178 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:06.178 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:06.178 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:06.178 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:06.178 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:06.178 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:06.178 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:06.178 CC 
lib/ftl/base/ftl_base_dev.o 00:02:06.178 CC lib/ftl/base/ftl_base_bdev.o 00:02:06.178 CC lib/ftl/ftl_trace.o 00:02:06.748 LIB libspdk_nbd.a 00:02:06.749 SO libspdk_nbd.so.7.0 00:02:06.749 LIB libspdk_scsi.a 00:02:06.749 SYMLINK libspdk_nbd.so 00:02:06.749 SO libspdk_scsi.so.9.0 00:02:06.749 LIB libspdk_ublk.a 00:02:06.749 SYMLINK libspdk_scsi.so 00:02:06.749 SO libspdk_ublk.so.3.0 00:02:07.010 SYMLINK libspdk_ublk.so 00:02:07.010 LIB libspdk_ftl.a 00:02:07.272 CC lib/iscsi/conn.o 00:02:07.272 CC lib/iscsi/init_grp.o 00:02:07.272 CC lib/iscsi/iscsi.o 00:02:07.272 CC lib/iscsi/param.o 00:02:07.272 CC lib/vhost/vhost.o 00:02:07.272 CC lib/iscsi/portal_grp.o 00:02:07.272 CC lib/vhost/vhost_rpc.o 00:02:07.272 CC lib/iscsi/tgt_node.o 00:02:07.272 CC lib/vhost/vhost_scsi.o 00:02:07.272 CC lib/iscsi/iscsi_subsystem.o 00:02:07.272 CC lib/vhost/vhost_blk.o 00:02:07.272 CC lib/iscsi/iscsi_rpc.o 00:02:07.272 CC lib/vhost/rte_vhost_user.o 00:02:07.272 CC lib/iscsi/task.o 00:02:07.272 SO libspdk_ftl.so.9.0 00:02:07.533 SYMLINK libspdk_ftl.so 00:02:08.105 LIB libspdk_nvmf.a 00:02:08.105 SO libspdk_nvmf.so.20.0 00:02:08.105 LIB libspdk_vhost.a 00:02:08.365 SO libspdk_vhost.so.8.0 00:02:08.365 SYMLINK libspdk_nvmf.so 00:02:08.365 SYMLINK libspdk_vhost.so 00:02:08.365 LIB libspdk_iscsi.a 00:02:08.365 SO libspdk_iscsi.so.8.0 00:02:08.626 SYMLINK libspdk_iscsi.so 00:02:09.196 CC module/vfu_device/vfu_virtio.o 00:02:09.196 CC module/env_dpdk/env_dpdk_rpc.o 00:02:09.196 CC module/vfu_device/vfu_virtio_blk.o 00:02:09.196 CC module/vfu_device/vfu_virtio_scsi.o 00:02:09.196 CC module/vfu_device/vfu_virtio_rpc.o 00:02:09.196 CC module/vfu_device/vfu_virtio_fs.o 00:02:09.457 CC module/sock/posix/posix.o 00:02:09.457 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:09.457 CC module/blob/bdev/blob_bdev.o 00:02:09.457 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:09.457 LIB libspdk_env_dpdk_rpc.a 00:02:09.457 CC module/scheduler/gscheduler/gscheduler.o 00:02:09.457 CC 
module/keyring/linux/keyring.o 00:02:09.457 CC module/keyring/linux/keyring_rpc.o 00:02:09.457 CC module/accel/ioat/accel_ioat.o 00:02:09.458 CC module/fsdev/aio/fsdev_aio.o 00:02:09.458 CC module/accel/ioat/accel_ioat_rpc.o 00:02:09.458 CC module/accel/error/accel_error_rpc.o 00:02:09.458 CC module/accel/error/accel_error.o 00:02:09.458 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:09.458 CC module/fsdev/aio/linux_aio_mgr.o 00:02:09.458 CC module/accel/iaa/accel_iaa.o 00:02:09.458 CC module/keyring/file/keyring.o 00:02:09.458 CC module/accel/iaa/accel_iaa_rpc.o 00:02:09.458 CC module/keyring/file/keyring_rpc.o 00:02:09.458 CC module/accel/dsa/accel_dsa.o 00:02:09.458 CC module/accel/dsa/accel_dsa_rpc.o 00:02:09.458 SO libspdk_env_dpdk_rpc.so.6.0 00:02:09.458 SYMLINK libspdk_env_dpdk_rpc.so 00:02:09.719 LIB libspdk_scheduler_dpdk_governor.a 00:02:09.719 LIB libspdk_keyring_linux.a 00:02:09.719 LIB libspdk_scheduler_gscheduler.a 00:02:09.719 LIB libspdk_keyring_file.a 00:02:09.719 LIB libspdk_scheduler_dynamic.a 00:02:09.719 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:09.719 SO libspdk_scheduler_gscheduler.so.4.0 00:02:09.719 SO libspdk_keyring_linux.so.1.0 00:02:09.719 LIB libspdk_accel_iaa.a 00:02:09.719 LIB libspdk_accel_error.a 00:02:09.719 LIB libspdk_accel_ioat.a 00:02:09.719 SO libspdk_keyring_file.so.2.0 00:02:09.719 SO libspdk_scheduler_dynamic.so.4.0 00:02:09.719 LIB libspdk_blob_bdev.a 00:02:09.719 SO libspdk_accel_error.so.2.0 00:02:09.719 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:09.719 SO libspdk_accel_iaa.so.3.0 00:02:09.719 SO libspdk_accel_ioat.so.6.0 00:02:09.719 SYMLINK libspdk_scheduler_gscheduler.so 00:02:09.719 SO libspdk_blob_bdev.so.11.0 00:02:09.719 SYMLINK libspdk_keyring_linux.so 00:02:09.719 LIB libspdk_accel_dsa.a 00:02:09.719 SYMLINK libspdk_keyring_file.so 00:02:09.719 SYMLINK libspdk_scheduler_dynamic.so 00:02:09.719 SYMLINK libspdk_accel_error.so 00:02:09.719 SYMLINK libspdk_accel_iaa.so 00:02:09.719 SYMLINK 
libspdk_accel_ioat.so 00:02:09.719 SO libspdk_accel_dsa.so.5.0 00:02:09.719 SYMLINK libspdk_blob_bdev.so 00:02:09.719 LIB libspdk_vfu_device.a 00:02:09.980 SYMLINK libspdk_accel_dsa.so 00:02:09.980 SO libspdk_vfu_device.so.3.0 00:02:09.980 SYMLINK libspdk_vfu_device.so 00:02:09.980 LIB libspdk_fsdev_aio.a 00:02:09.980 LIB libspdk_sock_posix.a 00:02:10.240 SO libspdk_fsdev_aio.so.1.0 00:02:10.241 SO libspdk_sock_posix.so.6.0 00:02:10.241 SYMLINK libspdk_fsdev_aio.so 00:02:10.241 SYMLINK libspdk_sock_posix.so 00:02:10.241 CC module/bdev/delay/vbdev_delay.o 00:02:10.241 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:10.241 CC module/bdev/lvol/vbdev_lvol.o 00:02:10.241 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:10.241 CC module/bdev/error/vbdev_error.o 00:02:10.241 CC module/bdev/error/vbdev_error_rpc.o 00:02:10.241 CC module/bdev/gpt/gpt.o 00:02:10.241 CC module/bdev/gpt/vbdev_gpt.o 00:02:10.241 CC module/bdev/ftl/bdev_ftl.o 00:02:10.241 CC module/blobfs/bdev/blobfs_bdev.o 00:02:10.241 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:10.241 CC module/bdev/malloc/bdev_malloc.o 00:02:10.241 CC module/bdev/nvme/bdev_nvme.o 00:02:10.241 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:10.241 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:10.241 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:10.241 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:10.241 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:10.241 CC module/bdev/null/bdev_null.o 00:02:10.241 CC module/bdev/raid/bdev_raid.o 00:02:10.241 CC module/bdev/nvme/nvme_rpc.o 00:02:10.241 CC module/bdev/passthru/vbdev_passthru.o 00:02:10.241 CC module/bdev/null/bdev_null_rpc.o 00:02:10.241 CC module/bdev/nvme/bdev_mdns_client.o 00:02:10.241 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:10.241 CC module/bdev/aio/bdev_aio.o 00:02:10.241 CC module/bdev/raid/bdev_raid_rpc.o 00:02:10.241 CC module/bdev/raid/bdev_raid_sb.o 00:02:10.241 CC module/bdev/nvme/vbdev_opal.o 00:02:10.241 CC module/bdev/aio/bdev_aio_rpc.o 00:02:10.241 CC 
module/bdev/raid/raid0.o 00:02:10.241 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:10.241 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:10.502 CC module/bdev/iscsi/bdev_iscsi.o 00:02:10.502 CC module/bdev/raid/raid1.o 00:02:10.502 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:10.502 CC module/bdev/raid/concat.o 00:02:10.502 CC module/bdev/split/vbdev_split.o 00:02:10.502 CC module/bdev/split/vbdev_split_rpc.o 00:02:10.502 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:10.502 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:10.502 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:10.762 LIB libspdk_blobfs_bdev.a 00:02:10.762 SO libspdk_blobfs_bdev.so.6.0 00:02:10.762 LIB libspdk_bdev_error.a 00:02:10.762 LIB libspdk_bdev_split.a 00:02:10.762 SO libspdk_bdev_error.so.6.0 00:02:10.762 SYMLINK libspdk_blobfs_bdev.so 00:02:10.762 LIB libspdk_bdev_gpt.a 00:02:10.762 SO libspdk_bdev_split.so.6.0 00:02:10.762 LIB libspdk_bdev_null.a 00:02:10.762 LIB libspdk_bdev_passthru.a 00:02:10.762 LIB libspdk_bdev_ftl.a 00:02:10.762 SO libspdk_bdev_gpt.so.6.0 00:02:10.762 SYMLINK libspdk_bdev_error.so 00:02:10.762 LIB libspdk_bdev_delay.a 00:02:10.762 SO libspdk_bdev_null.so.6.0 00:02:10.762 SO libspdk_bdev_passthru.so.6.0 00:02:10.762 SO libspdk_bdev_ftl.so.6.0 00:02:10.762 LIB libspdk_bdev_aio.a 00:02:10.762 LIB libspdk_bdev_malloc.a 00:02:10.762 LIB libspdk_bdev_zone_block.a 00:02:10.762 SYMLINK libspdk_bdev_split.so 00:02:10.762 SO libspdk_bdev_delay.so.6.0 00:02:10.762 SYMLINK libspdk_bdev_gpt.so 00:02:10.762 LIB libspdk_bdev_iscsi.a 00:02:10.762 SO libspdk_bdev_aio.so.6.0 00:02:10.762 SO libspdk_bdev_malloc.so.6.0 00:02:10.762 SO libspdk_bdev_zone_block.so.6.0 00:02:10.762 SYMLINK libspdk_bdev_passthru.so 00:02:10.762 SYMLINK libspdk_bdev_null.so 00:02:11.023 SYMLINK libspdk_bdev_ftl.so 00:02:11.023 SO libspdk_bdev_iscsi.so.6.0 00:02:11.023 SYMLINK libspdk_bdev_delay.so 00:02:11.023 SYMLINK libspdk_bdev_aio.so 00:02:11.023 SYMLINK libspdk_bdev_malloc.so 00:02:11.023 SYMLINK 
libspdk_bdev_zone_block.so 00:02:11.023 LIB libspdk_bdev_lvol.a 00:02:11.023 LIB libspdk_bdev_virtio.a 00:02:11.023 SYMLINK libspdk_bdev_iscsi.so 00:02:11.023 SO libspdk_bdev_lvol.so.6.0 00:02:11.023 SO libspdk_bdev_virtio.so.6.0 00:02:11.023 SYMLINK libspdk_bdev_lvol.so 00:02:11.023 SYMLINK libspdk_bdev_virtio.so 00:02:11.283 LIB libspdk_bdev_raid.a 00:02:11.544 SO libspdk_bdev_raid.so.6.0 00:02:11.544 SYMLINK libspdk_bdev_raid.so 00:02:12.929 LIB libspdk_bdev_nvme.a 00:02:12.929 SO libspdk_bdev_nvme.so.7.1 00:02:12.929 SYMLINK libspdk_bdev_nvme.so 00:02:13.500 CC module/event/subsystems/iobuf/iobuf.o 00:02:13.500 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:13.500 CC module/event/subsystems/keyring/keyring.o 00:02:13.500 CC module/event/subsystems/vmd/vmd.o 00:02:13.500 CC module/event/subsystems/sock/sock.o 00:02:13.500 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:13.500 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:13.500 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:13.500 CC module/event/subsystems/scheduler/scheduler.o 00:02:13.500 CC module/event/subsystems/fsdev/fsdev.o 00:02:13.761 LIB libspdk_event_vhost_blk.a 00:02:13.761 LIB libspdk_event_keyring.a 00:02:13.761 LIB libspdk_event_iobuf.a 00:02:13.761 LIB libspdk_event_vmd.a 00:02:13.761 LIB libspdk_event_sock.a 00:02:13.761 LIB libspdk_event_vfu_tgt.a 00:02:13.761 LIB libspdk_event_fsdev.a 00:02:13.761 LIB libspdk_event_scheduler.a 00:02:13.761 SO libspdk_event_vhost_blk.so.3.0 00:02:13.761 SO libspdk_event_keyring.so.1.0 00:02:13.761 SO libspdk_event_iobuf.so.3.0 00:02:13.761 SO libspdk_event_sock.so.5.0 00:02:13.762 SO libspdk_event_vmd.so.6.0 00:02:13.762 SO libspdk_event_vfu_tgt.so.3.0 00:02:13.762 SO libspdk_event_fsdev.so.1.0 00:02:13.762 SO libspdk_event_scheduler.so.4.0 00:02:14.032 SYMLINK libspdk_event_vhost_blk.so 00:02:14.032 SYMLINK libspdk_event_keyring.so 00:02:14.032 SYMLINK libspdk_event_iobuf.so 00:02:14.032 SYMLINK libspdk_event_vfu_tgt.so 00:02:14.032 SYMLINK 
libspdk_event_sock.so 00:02:14.032 SYMLINK libspdk_event_fsdev.so 00:02:14.032 SYMLINK libspdk_event_vmd.so 00:02:14.032 SYMLINK libspdk_event_scheduler.so 00:02:14.292 CC module/event/subsystems/accel/accel.o 00:02:14.552 LIB libspdk_event_accel.a 00:02:14.552 SO libspdk_event_accel.so.6.0 00:02:14.552 SYMLINK libspdk_event_accel.so 00:02:14.813 CC module/event/subsystems/bdev/bdev.o 00:02:15.074 LIB libspdk_event_bdev.a 00:02:15.074 SO libspdk_event_bdev.so.6.0 00:02:15.074 SYMLINK libspdk_event_bdev.so 00:02:15.646 CC module/event/subsystems/scsi/scsi.o 00:02:15.646 CC module/event/subsystems/nbd/nbd.o 00:02:15.646 CC module/event/subsystems/ublk/ublk.o 00:02:15.646 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:15.646 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:15.646 LIB libspdk_event_nbd.a 00:02:15.646 LIB libspdk_event_ublk.a 00:02:15.646 LIB libspdk_event_scsi.a 00:02:15.646 SO libspdk_event_nbd.so.6.0 00:02:15.646 SO libspdk_event_ublk.so.3.0 00:02:15.646 SO libspdk_event_scsi.so.6.0 00:02:15.908 LIB libspdk_event_nvmf.a 00:02:15.908 SYMLINK libspdk_event_nbd.so 00:02:15.908 SYMLINK libspdk_event_ublk.so 00:02:15.908 SYMLINK libspdk_event_scsi.so 00:02:15.908 SO libspdk_event_nvmf.so.6.0 00:02:15.908 SYMLINK libspdk_event_nvmf.so 00:02:16.168 CC module/event/subsystems/iscsi/iscsi.o 00:02:16.168 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:16.430 LIB libspdk_event_iscsi.a 00:02:16.430 LIB libspdk_event_vhost_scsi.a 00:02:16.430 SO libspdk_event_vhost_scsi.so.3.0 00:02:16.430 SO libspdk_event_iscsi.so.6.0 00:02:16.430 SYMLINK libspdk_event_vhost_scsi.so 00:02:16.430 SYMLINK libspdk_event_iscsi.so 00:02:16.705 SO libspdk.so.6.0 00:02:16.705 SYMLINK libspdk.so 00:02:17.036 CXX app/trace/trace.o 00:02:17.036 CC app/trace_record/trace_record.o 00:02:17.036 CC app/spdk_top/spdk_top.o 00:02:17.036 CC app/spdk_lspci/spdk_lspci.o 00:02:17.036 CC app/spdk_nvme_perf/perf.o 00:02:17.036 CC test/rpc_client/rpc_client_test.o 00:02:17.036 CC 
app/spdk_nvme_identify/identify.o 00:02:17.036 CC app/spdk_nvme_discover/discovery_aer.o 00:02:17.036 TEST_HEADER include/spdk/accel.h 00:02:17.036 TEST_HEADER include/spdk/accel_module.h 00:02:17.036 TEST_HEADER include/spdk/assert.h 00:02:17.036 TEST_HEADER include/spdk/barrier.h 00:02:17.036 TEST_HEADER include/spdk/base64.h 00:02:17.036 TEST_HEADER include/spdk/bdev.h 00:02:17.036 TEST_HEADER include/spdk/bdev_module.h 00:02:17.036 TEST_HEADER include/spdk/bdev_zone.h 00:02:17.036 TEST_HEADER include/spdk/bit_array.h 00:02:17.036 TEST_HEADER include/spdk/bit_pool.h 00:02:17.036 TEST_HEADER include/spdk/blob_bdev.h 00:02:17.036 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:17.036 TEST_HEADER include/spdk/blobfs.h 00:02:17.036 TEST_HEADER include/spdk/blob.h 00:02:17.036 TEST_HEADER include/spdk/config.h 00:02:17.036 TEST_HEADER include/spdk/conf.h 00:02:17.036 CC app/nvmf_tgt/nvmf_main.o 00:02:17.036 TEST_HEADER include/spdk/cpuset.h 00:02:17.036 TEST_HEADER include/spdk/crc16.h 00:02:17.036 TEST_HEADER include/spdk/crc32.h 00:02:17.036 TEST_HEADER include/spdk/crc64.h 00:02:17.036 TEST_HEADER include/spdk/dif.h 00:02:17.036 TEST_HEADER include/spdk/dma.h 00:02:17.036 TEST_HEADER include/spdk/endian.h 00:02:17.036 TEST_HEADER include/spdk/env_dpdk.h 00:02:17.036 CC app/spdk_dd/spdk_dd.o 00:02:17.036 TEST_HEADER include/spdk/env.h 00:02:17.036 TEST_HEADER include/spdk/fd_group.h 00:02:17.036 TEST_HEADER include/spdk/event.h 00:02:17.036 CC app/iscsi_tgt/iscsi_tgt.o 00:02:17.036 TEST_HEADER include/spdk/fd.h 00:02:17.036 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:17.036 TEST_HEADER include/spdk/file.h 00:02:17.036 TEST_HEADER include/spdk/fsdev.h 00:02:17.036 TEST_HEADER include/spdk/fsdev_module.h 00:02:17.036 TEST_HEADER include/spdk/ftl.h 00:02:17.036 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:17.036 TEST_HEADER include/spdk/hexlify.h 00:02:17.036 TEST_HEADER include/spdk/gpt_spec.h 00:02:17.036 TEST_HEADER include/spdk/histogram_data.h 00:02:17.036 
TEST_HEADER include/spdk/idxd_spec.h 00:02:17.036 TEST_HEADER include/spdk/idxd.h 00:02:17.036 TEST_HEADER include/spdk/init.h 00:02:17.036 TEST_HEADER include/spdk/ioat.h 00:02:17.036 TEST_HEADER include/spdk/ioat_spec.h 00:02:17.036 TEST_HEADER include/spdk/iscsi_spec.h 00:02:17.036 TEST_HEADER include/spdk/json.h 00:02:17.323 TEST_HEADER include/spdk/jsonrpc.h 00:02:17.323 TEST_HEADER include/spdk/keyring.h 00:02:17.323 CC app/spdk_tgt/spdk_tgt.o 00:02:17.323 TEST_HEADER include/spdk/keyring_module.h 00:02:17.323 TEST_HEADER include/spdk/log.h 00:02:17.323 TEST_HEADER include/spdk/likely.h 00:02:17.323 TEST_HEADER include/spdk/lvol.h 00:02:17.323 TEST_HEADER include/spdk/md5.h 00:02:17.323 TEST_HEADER include/spdk/memory.h 00:02:17.323 TEST_HEADER include/spdk/mmio.h 00:02:17.323 TEST_HEADER include/spdk/nbd.h 00:02:17.323 TEST_HEADER include/spdk/net.h 00:02:17.323 TEST_HEADER include/spdk/nvme.h 00:02:17.323 TEST_HEADER include/spdk/notify.h 00:02:17.323 TEST_HEADER include/spdk/nvme_intel.h 00:02:17.323 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:17.323 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:17.323 TEST_HEADER include/spdk/nvme_spec.h 00:02:17.323 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:17.323 TEST_HEADER include/spdk/nvme_zns.h 00:02:17.323 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:17.323 TEST_HEADER include/spdk/nvmf.h 00:02:17.323 TEST_HEADER include/spdk/nvmf_spec.h 00:02:17.323 TEST_HEADER include/spdk/nvmf_transport.h 00:02:17.323 TEST_HEADER include/spdk/opal.h 00:02:17.323 TEST_HEADER include/spdk/opal_spec.h 00:02:17.323 TEST_HEADER include/spdk/pci_ids.h 00:02:17.323 TEST_HEADER include/spdk/pipe.h 00:02:17.323 TEST_HEADER include/spdk/queue.h 00:02:17.323 TEST_HEADER include/spdk/reduce.h 00:02:17.323 TEST_HEADER include/spdk/rpc.h 00:02:17.323 TEST_HEADER include/spdk/scheduler.h 00:02:17.323 TEST_HEADER include/spdk/scsi.h 00:02:17.323 TEST_HEADER include/spdk/scsi_spec.h 00:02:17.323 TEST_HEADER include/spdk/sock.h 00:02:17.323 
TEST_HEADER include/spdk/stdinc.h 00:02:17.323 TEST_HEADER include/spdk/thread.h 00:02:17.323 TEST_HEADER include/spdk/string.h 00:02:17.323 TEST_HEADER include/spdk/trace.h 00:02:17.323 TEST_HEADER include/spdk/trace_parser.h 00:02:17.323 TEST_HEADER include/spdk/tree.h 00:02:17.323 TEST_HEADER include/spdk/ublk.h 00:02:17.323 TEST_HEADER include/spdk/uuid.h 00:02:17.323 TEST_HEADER include/spdk/util.h 00:02:17.323 TEST_HEADER include/spdk/version.h 00:02:17.323 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:17.323 TEST_HEADER include/spdk/vhost.h 00:02:17.323 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:17.323 TEST_HEADER include/spdk/vmd.h 00:02:17.323 TEST_HEADER include/spdk/zipf.h 00:02:17.323 TEST_HEADER include/spdk/xor.h 00:02:17.323 CXX test/cpp_headers/accel.o 00:02:17.323 CXX test/cpp_headers/accel_module.o 00:02:17.323 CXX test/cpp_headers/assert.o 00:02:17.323 CXX test/cpp_headers/barrier.o 00:02:17.323 CXX test/cpp_headers/base64.o 00:02:17.323 CXX test/cpp_headers/bdev.o 00:02:17.323 CXX test/cpp_headers/bdev_module.o 00:02:17.323 CXX test/cpp_headers/bdev_zone.o 00:02:17.323 CXX test/cpp_headers/bit_array.o 00:02:17.323 CXX test/cpp_headers/bit_pool.o 00:02:17.323 CXX test/cpp_headers/blob_bdev.o 00:02:17.323 CXX test/cpp_headers/blobfs.o 00:02:17.323 CXX test/cpp_headers/blobfs_bdev.o 00:02:17.323 CXX test/cpp_headers/conf.o 00:02:17.323 CXX test/cpp_headers/blob.o 00:02:17.323 CXX test/cpp_headers/config.o 00:02:17.323 CXX test/cpp_headers/cpuset.o 00:02:17.323 CXX test/cpp_headers/crc16.o 00:02:17.323 CXX test/cpp_headers/crc32.o 00:02:17.323 CXX test/cpp_headers/crc64.o 00:02:17.323 CXX test/cpp_headers/dif.o 00:02:17.323 CXX test/cpp_headers/dma.o 00:02:17.323 CXX test/cpp_headers/endian.o 00:02:17.323 CXX test/cpp_headers/event.o 00:02:17.323 CXX test/cpp_headers/env_dpdk.o 00:02:17.323 CXX test/cpp_headers/env.o 00:02:17.323 CXX test/cpp_headers/fd_group.o 00:02:17.323 CXX test/cpp_headers/fd.o 00:02:17.323 CXX test/cpp_headers/file.o 
00:02:17.323 CXX test/cpp_headers/fsdev.o 00:02:17.323 CXX test/cpp_headers/fsdev_module.o 00:02:17.323 CXX test/cpp_headers/ftl.o 00:02:17.323 CXX test/cpp_headers/gpt_spec.o 00:02:17.323 CXX test/cpp_headers/fuse_dispatcher.o 00:02:17.323 CXX test/cpp_headers/hexlify.o 00:02:17.323 CXX test/cpp_headers/histogram_data.o 00:02:17.323 CXX test/cpp_headers/init.o 00:02:17.323 CXX test/cpp_headers/idxd_spec.o 00:02:17.323 CXX test/cpp_headers/ioat.o 00:02:17.323 CXX test/cpp_headers/idxd.o 00:02:17.323 CXX test/cpp_headers/ioat_spec.o 00:02:17.323 CXX test/cpp_headers/iscsi_spec.o 00:02:17.323 CXX test/cpp_headers/jsonrpc.o 00:02:17.323 CXX test/cpp_headers/json.o 00:02:17.323 CXX test/cpp_headers/keyring.o 00:02:17.323 CXX test/cpp_headers/likely.o 00:02:17.323 CXX test/cpp_headers/keyring_module.o 00:02:17.323 CXX test/cpp_headers/lvol.o 00:02:17.323 CXX test/cpp_headers/mmio.o 00:02:17.323 CXX test/cpp_headers/md5.o 00:02:17.323 CXX test/cpp_headers/memory.o 00:02:17.323 CXX test/cpp_headers/log.o 00:02:17.323 CXX test/cpp_headers/nbd.o 00:02:17.323 CXX test/cpp_headers/notify.o 00:02:17.323 CXX test/cpp_headers/nvme.o 00:02:17.323 CXX test/cpp_headers/nvme_intel.o 00:02:17.323 CXX test/cpp_headers/net.o 00:02:17.323 CC app/fio/nvme/fio_plugin.o 00:02:17.323 CXX test/cpp_headers/nvme_zns.o 00:02:17.323 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:17.323 CXX test/cpp_headers/nvme_ocssd.o 00:02:17.323 CXX test/cpp_headers/nvme_spec.o 00:02:17.323 CXX test/cpp_headers/nvmf_cmd.o 00:02:17.323 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:17.323 CXX test/cpp_headers/nvmf.o 00:02:17.323 CXX test/cpp_headers/nvmf_spec.o 00:02:17.323 CXX test/cpp_headers/nvmf_transport.o 00:02:17.323 CC examples/ioat/verify/verify.o 00:02:17.323 CXX test/cpp_headers/opal.o 00:02:17.323 CXX test/cpp_headers/reduce.o 00:02:17.323 CXX test/cpp_headers/opal_spec.o 00:02:17.323 CXX test/cpp_headers/pci_ids.o 00:02:17.323 CC test/env/pci/pci_ut.o 00:02:17.323 CXX test/cpp_headers/pipe.o 00:02:17.323 
CC examples/util/zipf/zipf.o 00:02:17.323 CXX test/cpp_headers/queue.o 00:02:17.323 CXX test/cpp_headers/rpc.o 00:02:17.323 CXX test/cpp_headers/scheduler.o 00:02:17.323 CXX test/cpp_headers/sock.o 00:02:17.323 CC examples/ioat/perf/perf.o 00:02:17.323 CXX test/cpp_headers/scsi.o 00:02:17.323 CXX test/cpp_headers/scsi_spec.o 00:02:17.323 CXX test/cpp_headers/stdinc.o 00:02:17.323 LINK spdk_lspci 00:02:17.323 CXX test/cpp_headers/string.o 00:02:17.323 CXX test/cpp_headers/thread.o 00:02:17.323 CXX test/cpp_headers/trace_parser.o 00:02:17.324 CXX test/cpp_headers/trace.o 00:02:17.324 CC test/thread/poller_perf/poller_perf.o 00:02:17.324 CXX test/cpp_headers/tree.o 00:02:17.324 CXX test/cpp_headers/ublk.o 00:02:17.324 CXX test/cpp_headers/vfio_user_pci.o 00:02:17.324 CC test/env/vtophys/vtophys.o 00:02:17.324 CC test/env/memory/memory_ut.o 00:02:17.324 CXX test/cpp_headers/util.o 00:02:17.324 CXX test/cpp_headers/uuid.o 00:02:17.324 CXX test/cpp_headers/version.o 00:02:17.324 CXX test/cpp_headers/vhost.o 00:02:17.324 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.324 CXX test/cpp_headers/vfio_user_spec.o 00:02:17.324 CXX test/cpp_headers/vmd.o 00:02:17.324 CXX test/cpp_headers/xor.o 00:02:17.324 CXX test/cpp_headers/zipf.o 00:02:17.324 CC test/app/stub/stub.o 00:02:17.324 CC test/app/histogram_perf/histogram_perf.o 00:02:17.324 CC test/app/jsoncat/jsoncat.o 00:02:17.324 CC test/dma/test_dma/test_dma.o 00:02:17.627 LINK rpc_client_test 00:02:17.627 CC test/app/bdev_svc/bdev_svc.o 00:02:17.627 CC app/fio/bdev/fio_plugin.o 00:02:17.627 LINK spdk_nvme_discover 00:02:17.627 LINK nvmf_tgt 00:02:17.627 LINK spdk_trace_record 00:02:17.627 LINK iscsi_tgt 00:02:17.627 LINK interrupt_tgt 00:02:17.929 LINK spdk_tgt 00:02:18.193 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:18.193 CC test/env/mem_callbacks/mem_callbacks.o 00:02:18.193 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:18.193 LINK spdk_dd 00:02:18.193 LINK stub 00:02:18.193 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:18.193 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:18.454 LINK poller_perf 00:02:18.454 LINK vtophys 00:02:18.454 LINK zipf 00:02:18.454 LINK bdev_svc 00:02:18.454 LINK jsoncat 00:02:18.454 LINK histogram_perf 00:02:18.454 LINK env_dpdk_post_init 00:02:18.454 LINK spdk_trace 00:02:18.454 LINK verify 00:02:18.454 LINK ioat_perf 00:02:18.715 LINK pci_ut 00:02:18.715 LINK nvme_fuzz 00:02:18.715 LINK vhost_fuzz 00:02:18.715 LINK spdk_nvme_perf 00:02:18.715 LINK spdk_nvme 00:02:18.976 LINK spdk_bdev 00:02:18.976 CC app/vhost/vhost.o 00:02:18.976 LINK test_dma 00:02:18.976 LINK spdk_top 00:02:18.976 LINK mem_callbacks 00:02:18.976 CC test/event/reactor/reactor.o 00:02:18.976 LINK spdk_nvme_identify 00:02:18.976 CC examples/idxd/perf/perf.o 00:02:18.976 CC examples/vmd/lsvmd/lsvmd.o 00:02:18.976 CC test/event/reactor_perf/reactor_perf.o 00:02:18.976 CC examples/vmd/led/led.o 00:02:18.976 CC test/event/event_perf/event_perf.o 00:02:18.976 CC examples/sock/hello_world/hello_sock.o 00:02:18.976 CC test/event/app_repeat/app_repeat.o 00:02:18.976 CC test/event/scheduler/scheduler.o 00:02:18.976 CC examples/thread/thread/thread_ex.o 00:02:19.238 LINK vhost 00:02:19.238 LINK reactor 00:02:19.238 LINK lsvmd 00:02:19.238 LINK event_perf 00:02:19.238 LINK reactor_perf 00:02:19.238 LINK led 00:02:19.238 LINK app_repeat 00:02:19.238 LINK hello_sock 00:02:19.238 LINK scheduler 00:02:19.238 LINK thread 00:02:19.238 LINK idxd_perf 00:02:19.499 LINK memory_ut 00:02:19.499 CC test/nvme/e2edp/nvme_dp.o 00:02:19.499 CC test/nvme/sgl/sgl.o 00:02:19.499 CC test/nvme/aer/aer.o 00:02:19.499 CC test/nvme/cuse/cuse.o 00:02:19.499 CC test/nvme/overhead/overhead.o 00:02:19.499 CC test/nvme/simple_copy/simple_copy.o 00:02:19.499 CC test/nvme/err_injection/err_injection.o 00:02:19.499 CC test/nvme/compliance/nvme_compliance.o 00:02:19.499 CC test/nvme/reset/reset.o 00:02:19.499 CC test/nvme/reserve/reserve.o 00:02:19.499 CC 
test/nvme/startup/startup.o 00:02:19.499 CC test/nvme/fused_ordering/fused_ordering.o 00:02:19.499 CC test/nvme/connect_stress/connect_stress.o 00:02:19.499 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:19.499 CC test/nvme/boot_partition/boot_partition.o 00:02:19.499 CC test/nvme/fdp/fdp.o 00:02:19.499 CC test/blobfs/mkfs/mkfs.o 00:02:19.499 CC test/accel/dif/dif.o 00:02:19.760 CC test/lvol/esnap/esnap.o 00:02:19.760 LINK err_injection 00:02:19.760 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:19.760 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:19.760 CC examples/nvme/hotplug/hotplug.o 00:02:19.760 CC examples/nvme/hello_world/hello_world.o 00:02:19.760 LINK startup 00:02:19.760 LINK boot_partition 00:02:19.760 LINK doorbell_aers 00:02:19.760 CC examples/nvme/reconnect/reconnect.o 00:02:19.760 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:19.760 CC examples/nvme/arbitration/arbitration.o 00:02:19.760 CC examples/nvme/abort/abort.o 00:02:19.760 LINK iscsi_fuzz 00:02:19.760 LINK connect_stress 00:02:19.760 LINK reserve 00:02:19.760 LINK fused_ordering 00:02:19.760 LINK simple_copy 00:02:19.760 LINK nvme_dp 00:02:19.760 LINK sgl 00:02:19.760 LINK mkfs 00:02:19.760 LINK reset 00:02:20.021 LINK aer 00:02:20.021 LINK overhead 00:02:20.021 LINK nvme_compliance 00:02:20.021 CC examples/accel/perf/accel_perf.o 00:02:20.021 LINK fdp 00:02:20.021 CC examples/blob/hello_world/hello_blob.o 00:02:20.021 CC examples/blob/cli/blobcli.o 00:02:20.021 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:20.021 LINK cmb_copy 00:02:20.021 LINK pmr_persistence 00:02:20.021 LINK hello_world 00:02:20.021 LINK hotplug 00:02:20.282 LINK reconnect 00:02:20.282 LINK arbitration 00:02:20.282 LINK abort 00:02:20.282 LINK dif 00:02:20.282 LINK hello_blob 00:02:20.282 LINK hello_fsdev 00:02:20.282 LINK nvme_manage 00:02:20.543 LINK accel_perf 00:02:20.543 LINK blobcli 00:02:20.804 LINK cuse 00:02:20.804 CC test/bdev/bdevio/bdevio.o 00:02:21.066 CC 
examples/bdev/hello_world/hello_bdev.o 00:02:21.066 CC examples/bdev/bdevperf/bdevperf.o 00:02:21.327 LINK hello_bdev 00:02:21.327 LINK bdevio 00:02:21.899 LINK bdevperf 00:02:22.472 CC examples/nvmf/nvmf/nvmf.o 00:02:22.732 LINK nvmf 00:02:24.116 LINK esnap 00:02:24.687 00:02:24.687 real 0m56.191s 00:02:24.687 user 8m6.006s 00:02:24.687 sys 5m26.161s 00:02:24.687 08:47:49 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:24.687 08:47:49 make -- common/autotest_common.sh@10 -- $ set +x 00:02:24.687 ************************************ 00:02:24.687 END TEST make 00:02:24.687 ************************************ 00:02:24.687 08:47:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:24.687 08:47:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:24.687 08:47:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:24.687 08:47:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.687 08:47:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:24.687 08:47:49 -- pm/common@44 -- $ pid=363257 00:02:24.687 08:47:49 -- pm/common@50 -- $ kill -TERM 363257 00:02:24.687 08:47:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.687 08:47:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:24.687 08:47:49 -- pm/common@44 -- $ pid=363258 00:02:24.687 08:47:49 -- pm/common@50 -- $ kill -TERM 363258 00:02:24.687 08:47:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.687 08:47:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:24.687 08:47:49 -- pm/common@44 -- $ pid=363260 00:02:24.687 08:47:49 -- pm/common@50 -- $ kill -TERM 363260 00:02:24.687 08:47:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.687 08:47:49 -- pm/common@43 -- $ [[ 
-e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:24.687 08:47:49 -- pm/common@44 -- $ pid=363284 00:02:24.687 08:47:49 -- pm/common@50 -- $ sudo -E kill -TERM 363284 00:02:24.687 08:47:49 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:24.687 08:47:49 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:24.687 08:47:50 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:24.687 08:47:50 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:24.687 08:47:50 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:24.687 08:47:50 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:24.687 08:47:50 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:24.687 08:47:50 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:24.687 08:47:50 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:24.687 08:47:50 -- scripts/common.sh@336 -- # IFS=.-: 00:02:24.687 08:47:50 -- scripts/common.sh@336 -- # read -ra ver1 00:02:24.687 08:47:50 -- scripts/common.sh@337 -- # IFS=.-: 00:02:24.687 08:47:50 -- scripts/common.sh@337 -- # read -ra ver2 00:02:24.687 08:47:50 -- scripts/common.sh@338 -- # local 'op=<' 00:02:24.687 08:47:50 -- scripts/common.sh@340 -- # ver1_l=2 00:02:24.687 08:47:50 -- scripts/common.sh@341 -- # ver2_l=1 00:02:24.687 08:47:50 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:24.687 08:47:50 -- scripts/common.sh@344 -- # case "$op" in 00:02:24.687 08:47:50 -- scripts/common.sh@345 -- # : 1 00:02:24.687 08:47:50 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:24.687 08:47:50 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:24.687 08:47:50 -- scripts/common.sh@365 -- # decimal 1 00:02:24.687 08:47:50 -- scripts/common.sh@353 -- # local d=1 00:02:24.687 08:47:50 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:24.687 08:47:50 -- scripts/common.sh@355 -- # echo 1 00:02:24.687 08:47:50 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:24.687 08:47:50 -- scripts/common.sh@366 -- # decimal 2 00:02:24.687 08:47:50 -- scripts/common.sh@353 -- # local d=2 00:02:24.687 08:47:50 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:24.687 08:47:50 -- scripts/common.sh@355 -- # echo 2 00:02:24.687 08:47:50 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:24.687 08:47:50 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:24.687 08:47:50 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:24.687 08:47:50 -- scripts/common.sh@368 -- # return 0 00:02:24.687 08:47:50 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:24.687 08:47:50 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:24.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:24.687 --rc genhtml_branch_coverage=1 00:02:24.687 --rc genhtml_function_coverage=1 00:02:24.687 --rc genhtml_legend=1 00:02:24.687 --rc geninfo_all_blocks=1 00:02:24.687 --rc geninfo_unexecuted_blocks=1 00:02:24.687 00:02:24.687 ' 00:02:24.687 08:47:50 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:24.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:24.687 --rc genhtml_branch_coverage=1 00:02:24.687 --rc genhtml_function_coverage=1 00:02:24.687 --rc genhtml_legend=1 00:02:24.687 --rc geninfo_all_blocks=1 00:02:24.687 --rc geninfo_unexecuted_blocks=1 00:02:24.687 00:02:24.687 ' 00:02:24.687 08:47:50 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:24.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:24.687 --rc genhtml_branch_coverage=1 00:02:24.688 --rc 
genhtml_function_coverage=1 00:02:24.688 --rc genhtml_legend=1 00:02:24.688 --rc geninfo_all_blocks=1 00:02:24.688 --rc geninfo_unexecuted_blocks=1 00:02:24.688 00:02:24.688 ' 00:02:24.688 08:47:50 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:24.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:24.688 --rc genhtml_branch_coverage=1 00:02:24.688 --rc genhtml_function_coverage=1 00:02:24.688 --rc genhtml_legend=1 00:02:24.688 --rc geninfo_all_blocks=1 00:02:24.688 --rc geninfo_unexecuted_blocks=1 00:02:24.688 00:02:24.688 ' 00:02:24.688 08:47:50 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:24.688 08:47:50 -- nvmf/common.sh@7 -- # uname -s 00:02:24.688 08:47:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:24.688 08:47:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:24.688 08:47:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:24.688 08:47:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:24.688 08:47:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:24.688 08:47:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:24.688 08:47:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:24.688 08:47:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:24.688 08:47:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:24.688 08:47:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:24.950 08:47:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:24.950 08:47:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:24.950 08:47:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:24.950 08:47:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:24.950 08:47:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:24.950 08:47:50 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:24.950 08:47:50 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:24.950 08:47:50 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:24.950 08:47:50 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:24.950 08:47:50 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:24.950 08:47:50 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:24.950 08:47:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.950 08:47:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.950 08:47:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.950 08:47:50 -- paths/export.sh@5 -- # export PATH 00:02:24.950 08:47:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.950 08:47:50 -- nvmf/common.sh@51 -- # : 0 00:02:24.950 08:47:50 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:24.950 08:47:50 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:24.950 08:47:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:24.950 08:47:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:24.950 08:47:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:24.950 08:47:50 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:24.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:24.950 08:47:50 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:24.950 08:47:50 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:24.950 08:47:50 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:24.950 08:47:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:24.950 08:47:50 -- spdk/autotest.sh@32 -- # uname -s 00:02:24.950 08:47:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:24.950 08:47:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:24.950 08:47:50 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:24.950 08:47:50 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:24.950 08:47:50 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:24.950 08:47:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:24.950 08:47:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:24.950 08:47:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:24.950 08:47:50 -- spdk/autotest.sh@48 -- # udevadm_pid=428845 00:02:24.950 08:47:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:24.950 08:47:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:24.950 08:47:50 -- pm/common@17 -- # local monitor 00:02:24.950 08:47:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.950 08:47:50 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:24.950 08:47:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.950 08:47:50 -- pm/common@21 -- # date +%s 00:02:24.950 08:47:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.950 08:47:50 -- pm/common@25 -- # sleep 1 00:02:24.950 08:47:50 -- pm/common@21 -- # date +%s 00:02:24.950 08:47:50 -- pm/common@21 -- # date +%s 00:02:24.950 08:47:50 -- pm/common@21 -- # date +%s 00:02:24.950 08:47:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732088870 00:02:24.950 08:47:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732088870 00:02:24.950 08:47:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732088870 00:02:24.950 08:47:50 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732088870 00:02:24.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732088870_collect-cpu-load.pm.log 00:02:24.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732088870_collect-vmstat.pm.log 00:02:24.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732088870_collect-cpu-temp.pm.log 00:02:24.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732088870_collect-bmc-pm.bmc.pm.log 00:02:25.895 
08:47:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:25.895 08:47:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:25.895 08:47:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:25.895 08:47:51 -- common/autotest_common.sh@10 -- # set +x 00:02:25.895 08:47:51 -- spdk/autotest.sh@59 -- # create_test_list 00:02:25.895 08:47:51 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:25.895 08:47:51 -- common/autotest_common.sh@10 -- # set +x 00:02:25.895 08:47:51 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:25.895 08:47:51 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.895 08:47:51 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.895 08:47:51 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:25.895 08:47:51 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.895 08:47:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:25.895 08:47:51 -- common/autotest_common.sh@1457 -- # uname 00:02:25.895 08:47:51 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:25.895 08:47:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:25.895 08:47:51 -- common/autotest_common.sh@1477 -- # uname 00:02:25.895 08:47:51 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:25.895 08:47:51 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:25.895 08:47:51 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:25.895 lcov: LCOV version 1.15 00:02:26.156 08:47:51 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:41.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:41.067 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:59.183 08:48:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:59.183 08:48:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:59.183 08:48:21 -- common/autotest_common.sh@10 -- # set +x 00:02:59.183 08:48:21 -- spdk/autotest.sh@78 -- # rm -f 00:02:59.183 08:48:21 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.754 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:59.754 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:59.754 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:59.754 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:59.754 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:59.754 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:59.754 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:59.754 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:00.016 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:00.016 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:00.016 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:00.016 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:00.016 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:00.016 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:00.016 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:00.016 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:00.016 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:00.277 08:48:25 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:00.277 08:48:25 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:00.277 08:48:25 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:00.277 08:48:25 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:00.277 08:48:25 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:00.277 08:48:25 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:00.277 08:48:25 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:00.277 08:48:25 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:00.277 08:48:25 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:00.277 08:48:25 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:00.277 08:48:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:00.277 08:48:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:00.277 08:48:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:00.277 08:48:25 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:00.277 08:48:25 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:00.537 No valid GPT data, bailing 00:03:00.537 08:48:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:00.537 08:48:25 -- scripts/common.sh@394 -- # pt= 00:03:00.537 08:48:25 -- scripts/common.sh@395 -- # return 1 00:03:00.537 08:48:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:00.537 1+0 records in 00:03:00.537 1+0 records out 00:03:00.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457783 s, 229 MB/s 00:03:00.537 08:48:25 -- spdk/autotest.sh@105 -- # sync 00:03:00.537 08:48:25 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:00.537 08:48:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:00.537 08:48:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:10.535 08:48:34 -- spdk/autotest.sh@111 -- # uname -s 00:03:10.535 08:48:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:10.535 08:48:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:10.535 08:48:34 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:12.449 Hugepages 00:03:12.449 node hugesize free / total 00:03:12.449 node0 1048576kB 0 / 0 00:03:12.449 node0 2048kB 0 / 0 00:03:12.449 node1 1048576kB 0 / 0 00:03:12.449 node1 2048kB 0 / 0 00:03:12.449 00:03:12.449 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:12.449 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:12.449 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:12.449 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:12.449 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:12.449 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:12.449 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:12.449 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:12.449 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:12.710 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:12.710 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:12.710 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:12.710 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:12.710 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:12.710 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:12.710 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:12.710 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:12.710 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:12.710 08:48:38 -- spdk/autotest.sh@117 -- # uname -s 00:03:12.710 08:48:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:12.710 08:48:38 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:12.710 08:48:38 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:16.015 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:16.277 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:18.190 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:18.450 08:48:43 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:19.391 08:48:44 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:19.391 08:48:44 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:19.391 08:48:44 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:19.391 08:48:44 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:19.391 08:48:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:19.391 08:48:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:19.391 08:48:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:19.391 08:48:44 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:19.391 08:48:44 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:19.651 08:48:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:19.651 08:48:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:19.651 08:48:44 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.951 Waiting for block devices as requested 00:03:22.951 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:22.951 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:23.211 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:23.211 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:23.211 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:23.472 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:23.472 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:23.472 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:23.733 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:23.733 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:23.994 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:23.994 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:23.994 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:24.254 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:24.254 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:24.254 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:24.514 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:24.775 08:48:50 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:24.775 08:48:50 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:24.775 08:48:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:24.775 08:48:50 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:24.775 08:48:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:24.775 08:48:50 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:24.775 08:48:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:24.775 08:48:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:24.775 08:48:50 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:24.775 08:48:50 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:24.775 08:48:50 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:24.775 08:48:50 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:24.775 08:48:50 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:24.775 08:48:50 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:24.775 08:48:50 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:24.775 08:48:50 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:24.775 08:48:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:24.775 08:48:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:24.775 08:48:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:24.775 08:48:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:24.775 08:48:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:24.775 08:48:50 -- common/autotest_common.sh@1543 -- # continue 00:03:24.775 08:48:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:24.775 08:48:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:24.775 08:48:50 -- common/autotest_common.sh@10 -- # set +x 00:03:24.775 08:48:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:24.775 08:48:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:24.775 08:48:50 -- common/autotest_common.sh@10 -- # set +x 00:03:24.775 08:48:50 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.979 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:03:28.979 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:28.979 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:28.979 08:48:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:28.979 08:48:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:28.979 08:48:54 -- common/autotest_common.sh@10 -- # set +x 00:03:28.979 08:48:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:28.979 08:48:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:28.979 08:48:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:28.979 08:48:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:28.979 08:48:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:28.979 08:48:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:28.979 08:48:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:28.979 08:48:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:28.979 08:48:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:28.979 08:48:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:28.979 08:48:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:28.979 08:48:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:28.979 08:48:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:28.979 08:48:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:28.979 08:48:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:28.979 08:48:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:28.979 08:48:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:28.979 08:48:54 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:28.979 08:48:54 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:28.979 08:48:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:28.979 08:48:54 -- common/autotest_common.sh@1572 -- # return 0 00:03:28.979 08:48:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:28.979 08:48:54 -- common/autotest_common.sh@1580 -- # return 0 00:03:28.979 08:48:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:28.979 08:48:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:28.979 08:48:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:28.979 08:48:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:28.979 08:48:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:28.979 08:48:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:28.979 08:48:54 -- common/autotest_common.sh@10 -- # set +x 00:03:28.979 08:48:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:28.979 08:48:54 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:28.979 08:48:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.979 08:48:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.979 08:48:54 -- common/autotest_common.sh@10 -- # set +x 00:03:28.979 ************************************ 
00:03:28.979 START TEST env 00:03:28.979 ************************************ 00:03:28.979 08:48:54 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:29.240 * Looking for test storage... 00:03:29.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:29.240 08:48:54 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:29.240 08:48:54 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:29.240 08:48:54 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:29.240 08:48:54 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:29.240 08:48:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:29.240 08:48:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:29.240 08:48:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:29.240 08:48:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.240 08:48:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:29.240 08:48:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:29.240 08:48:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:29.240 08:48:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:29.240 08:48:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:29.240 08:48:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:29.240 08:48:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.240 08:48:54 env -- scripts/common.sh@344 -- # case "$op" in 00:03:29.240 08:48:54 env -- scripts/common.sh@345 -- # : 1 00:03:29.240 08:48:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.240 08:48:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.240 08:48:54 env -- scripts/common.sh@365 -- # decimal 1 00:03:29.240 08:48:54 env -- scripts/common.sh@353 -- # local d=1 00:03:29.240 08:48:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.240 08:48:54 env -- scripts/common.sh@355 -- # echo 1 00:03:29.240 08:48:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.240 08:48:54 env -- scripts/common.sh@366 -- # decimal 2 00:03:29.240 08:48:54 env -- scripts/common.sh@353 -- # local d=2 00:03:29.240 08:48:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.240 08:48:54 env -- scripts/common.sh@355 -- # echo 2 00:03:29.240 08:48:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.240 08:48:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.240 08:48:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.240 08:48:54 env -- scripts/common.sh@368 -- # return 0 00:03:29.240 08:48:54 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.240 08:48:54 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.240 --rc genhtml_branch_coverage=1 00:03:29.240 --rc genhtml_function_coverage=1 00:03:29.240 --rc genhtml_legend=1 00:03:29.240 --rc geninfo_all_blocks=1 00:03:29.240 --rc geninfo_unexecuted_blocks=1 00:03:29.240 00:03:29.240 ' 00:03:29.240 08:48:54 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.240 --rc genhtml_branch_coverage=1 00:03:29.240 --rc genhtml_function_coverage=1 00:03:29.240 --rc genhtml_legend=1 00:03:29.240 --rc geninfo_all_blocks=1 00:03:29.240 --rc geninfo_unexecuted_blocks=1 00:03:29.240 00:03:29.240 ' 00:03:29.240 08:48:54 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:29.240 --rc genhtml_branch_coverage=1 00:03:29.240 --rc genhtml_function_coverage=1 00:03:29.240 --rc genhtml_legend=1 00:03:29.240 --rc geninfo_all_blocks=1 00:03:29.240 --rc geninfo_unexecuted_blocks=1 00:03:29.240 00:03:29.240 ' 00:03:29.240 08:48:54 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.240 --rc genhtml_branch_coverage=1 00:03:29.240 --rc genhtml_function_coverage=1 00:03:29.240 --rc genhtml_legend=1 00:03:29.240 --rc geninfo_all_blocks=1 00:03:29.240 --rc geninfo_unexecuted_blocks=1 00:03:29.240 00:03:29.240 ' 00:03:29.240 08:48:54 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:29.240 08:48:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.240 08:48:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.240 08:48:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.240 ************************************ 00:03:29.240 START TEST env_memory 00:03:29.241 ************************************ 00:03:29.241 08:48:54 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:29.241 00:03:29.241 00:03:29.241 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.241 http://cunit.sourceforge.net/ 00:03:29.241 00:03:29.241 00:03:29.241 Suite: memory 00:03:29.502 Test: alloc and free memory map ...[2024-11-20 08:48:54.767530] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:29.502 passed 00:03:29.502 Test: mem map translation ...[2024-11-20 08:48:54.793130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:29.502 [2024-11-20 
08:48:54.793167] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:29.502 [2024-11-20 08:48:54.793216] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:29.502 [2024-11-20 08:48:54.793224] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:29.502 passed 00:03:29.502 Test: mem map registration ...[2024-11-20 08:48:54.848494] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:29.502 [2024-11-20 08:48:54.848518] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:29.502 passed 00:03:29.502 Test: mem map adjacent registrations ...passed 00:03:29.502 00:03:29.502 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.502 suites 1 1 n/a 0 0 00:03:29.502 tests 4 4 4 0 0 00:03:29.502 asserts 152 152 152 0 n/a 00:03:29.502 00:03:29.502 Elapsed time = 0.193 seconds 00:03:29.502 00:03:29.502 real 0m0.208s 00:03:29.502 user 0m0.195s 00:03:29.502 sys 0m0.012s 00:03:29.502 08:48:54 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.502 08:48:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:29.502 ************************************ 00:03:29.502 END TEST env_memory 00:03:29.502 ************************************ 00:03:29.502 08:48:54 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:29.502 08:48:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:29.502 08:48:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.502 08:48:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.502 ************************************ 00:03:29.502 START TEST env_vtophys 00:03:29.502 ************************************ 00:03:29.502 08:48:55 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:29.502 EAL: lib.eal log level changed from notice to debug 00:03:29.502 EAL: Detected lcore 0 as core 0 on socket 0 00:03:29.503 EAL: Detected lcore 1 as core 1 on socket 0 00:03:29.503 EAL: Detected lcore 2 as core 2 on socket 0 00:03:29.503 EAL: Detected lcore 3 as core 3 on socket 0 00:03:29.503 EAL: Detected lcore 4 as core 4 on socket 0 00:03:29.503 EAL: Detected lcore 5 as core 5 on socket 0 00:03:29.503 EAL: Detected lcore 6 as core 6 on socket 0 00:03:29.503 EAL: Detected lcore 7 as core 7 on socket 0 00:03:29.503 EAL: Detected lcore 8 as core 8 on socket 0 00:03:29.503 EAL: Detected lcore 9 as core 9 on socket 0 00:03:29.503 EAL: Detected lcore 10 as core 10 on socket 0 00:03:29.503 EAL: Detected lcore 11 as core 11 on socket 0 00:03:29.503 EAL: Detected lcore 12 as core 12 on socket 0 00:03:29.503 EAL: Detected lcore 13 as core 13 on socket 0 00:03:29.503 EAL: Detected lcore 14 as core 14 on socket 0 00:03:29.503 EAL: Detected lcore 15 as core 15 on socket 0 00:03:29.503 EAL: Detected lcore 16 as core 16 on socket 0 00:03:29.503 EAL: Detected lcore 17 as core 17 on socket 0 00:03:29.503 EAL: Detected lcore 18 as core 18 on socket 0 00:03:29.503 EAL: Detected lcore 19 as core 19 on socket 0 00:03:29.503 EAL: Detected lcore 20 as core 20 on socket 0 00:03:29.503 EAL: Detected lcore 21 as core 21 on socket 0 00:03:29.503 EAL: Detected lcore 22 as core 22 on socket 0 00:03:29.503 EAL: Detected lcore 23 as core 23 on socket 0 00:03:29.503 EAL: Detected lcore 24 as core 24 on socket 0 00:03:29.503 EAL: Detected lcore 25 
as core 25 on socket 0 00:03:29.503 EAL: Detected lcore 26 as core 26 on socket 0 00:03:29.503 EAL: Detected lcore 27 as core 27 on socket 0 00:03:29.503 EAL: Detected lcore 28 as core 28 on socket 0 00:03:29.503 EAL: Detected lcore 29 as core 29 on socket 0 00:03:29.503 EAL: Detected lcore 30 as core 30 on socket 0 00:03:29.503 EAL: Detected lcore 31 as core 31 on socket 0 00:03:29.503 EAL: Detected lcore 32 as core 32 on socket 0 00:03:29.503 EAL: Detected lcore 33 as core 33 on socket 0 00:03:29.503 EAL: Detected lcore 34 as core 34 on socket 0 00:03:29.503 EAL: Detected lcore 35 as core 35 on socket 0 00:03:29.503 EAL: Detected lcore 36 as core 0 on socket 1 00:03:29.503 EAL: Detected lcore 37 as core 1 on socket 1 00:03:29.503 EAL: Detected lcore 38 as core 2 on socket 1 00:03:29.503 EAL: Detected lcore 39 as core 3 on socket 1 00:03:29.503 EAL: Detected lcore 40 as core 4 on socket 1 00:03:29.503 EAL: Detected lcore 41 as core 5 on socket 1 00:03:29.503 EAL: Detected lcore 42 as core 6 on socket 1 00:03:29.503 EAL: Detected lcore 43 as core 7 on socket 1 00:03:29.503 EAL: Detected lcore 44 as core 8 on socket 1 00:03:29.503 EAL: Detected lcore 45 as core 9 on socket 1 00:03:29.503 EAL: Detected lcore 46 as core 10 on socket 1 00:03:29.503 EAL: Detected lcore 47 as core 11 on socket 1 00:03:29.503 EAL: Detected lcore 48 as core 12 on socket 1 00:03:29.503 EAL: Detected lcore 49 as core 13 on socket 1 00:03:29.503 EAL: Detected lcore 50 as core 14 on socket 1 00:03:29.503 EAL: Detected lcore 51 as core 15 on socket 1 00:03:29.503 EAL: Detected lcore 52 as core 16 on socket 1 00:03:29.503 EAL: Detected lcore 53 as core 17 on socket 1 00:03:29.503 EAL: Detected lcore 54 as core 18 on socket 1 00:03:29.503 EAL: Detected lcore 55 as core 19 on socket 1 00:03:29.503 EAL: Detected lcore 56 as core 20 on socket 1 00:03:29.503 EAL: Detected lcore 57 as core 21 on socket 1 00:03:29.765 EAL: Detected lcore 58 as core 22 on socket 1 00:03:29.765 EAL: Detected lcore 59 as 
core 23 on socket 1 00:03:29.765 EAL: Detected lcore 60 as core 24 on socket 1 00:03:29.765 EAL: Detected lcore 61 as core 25 on socket 1 00:03:29.765 EAL: Detected lcore 62 as core 26 on socket 1 00:03:29.765 EAL: Detected lcore 63 as core 27 on socket 1 00:03:29.765 EAL: Detected lcore 64 as core 28 on socket 1 00:03:29.765 EAL: Detected lcore 65 as core 29 on socket 1 00:03:29.765 EAL: Detected lcore 66 as core 30 on socket 1 00:03:29.765 EAL: Detected lcore 67 as core 31 on socket 1 00:03:29.765 EAL: Detected lcore 68 as core 32 on socket 1 00:03:29.765 EAL: Detected lcore 69 as core 33 on socket 1 00:03:29.765 EAL: Detected lcore 70 as core 34 on socket 1 00:03:29.765 EAL: Detected lcore 71 as core 35 on socket 1 00:03:29.765 EAL: Detected lcore 72 as core 0 on socket 0 00:03:29.765 EAL: Detected lcore 73 as core 1 on socket 0 00:03:29.765 EAL: Detected lcore 74 as core 2 on socket 0 00:03:29.765 EAL: Detected lcore 75 as core 3 on socket 0 00:03:29.765 EAL: Detected lcore 76 as core 4 on socket 0 00:03:29.765 EAL: Detected lcore 77 as core 5 on socket 0 00:03:29.765 EAL: Detected lcore 78 as core 6 on socket 0 00:03:29.765 EAL: Detected lcore 79 as core 7 on socket 0 00:03:29.765 EAL: Detected lcore 80 as core 8 on socket 0 00:03:29.765 EAL: Detected lcore 81 as core 9 on socket 0 00:03:29.765 EAL: Detected lcore 82 as core 10 on socket 0 00:03:29.765 EAL: Detected lcore 83 as core 11 on socket 0 00:03:29.765 EAL: Detected lcore 84 as core 12 on socket 0 00:03:29.765 EAL: Detected lcore 85 as core 13 on socket 0 00:03:29.765 EAL: Detected lcore 86 as core 14 on socket 0 00:03:29.765 EAL: Detected lcore 87 as core 15 on socket 0 00:03:29.765 EAL: Detected lcore 88 as core 16 on socket 0 00:03:29.765 EAL: Detected lcore 89 as core 17 on socket 0 00:03:29.765 EAL: Detected lcore 90 as core 18 on socket 0 00:03:29.765 EAL: Detected lcore 91 as core 19 on socket 0 00:03:29.765 EAL: Detected lcore 92 as core 20 on socket 0 00:03:29.765 EAL: Detected lcore 93 as 
core 21 on socket 0 00:03:29.765 EAL: Detected lcore 94 as core 22 on socket 0 00:03:29.765 EAL: Detected lcore 95 as core 23 on socket 0 00:03:29.765 EAL: Detected lcore 96 as core 24 on socket 0 00:03:29.765 EAL: Detected lcore 97 as core 25 on socket 0 00:03:29.765 EAL: Detected lcore 98 as core 26 on socket 0 00:03:29.765 EAL: Detected lcore 99 as core 27 on socket 0 00:03:29.765 EAL: Detected lcore 100 as core 28 on socket 0 00:03:29.765 EAL: Detected lcore 101 as core 29 on socket 0 00:03:29.765 EAL: Detected lcore 102 as core 30 on socket 0 00:03:29.765 EAL: Detected lcore 103 as core 31 on socket 0 00:03:29.765 EAL: Detected lcore 104 as core 32 on socket 0 00:03:29.765 EAL: Detected lcore 105 as core 33 on socket 0 00:03:29.765 EAL: Detected lcore 106 as core 34 on socket 0 00:03:29.765 EAL: Detected lcore 107 as core 35 on socket 0 00:03:29.765 EAL: Detected lcore 108 as core 0 on socket 1 00:03:29.765 EAL: Detected lcore 109 as core 1 on socket 1 00:03:29.765 EAL: Detected lcore 110 as core 2 on socket 1 00:03:29.765 EAL: Detected lcore 111 as core 3 on socket 1 00:03:29.765 EAL: Detected lcore 112 as core 4 on socket 1 00:03:29.765 EAL: Detected lcore 113 as core 5 on socket 1 00:03:29.765 EAL: Detected lcore 114 as core 6 on socket 1 00:03:29.765 EAL: Detected lcore 115 as core 7 on socket 1 00:03:29.765 EAL: Detected lcore 116 as core 8 on socket 1 00:03:29.765 EAL: Detected lcore 117 as core 9 on socket 1 00:03:29.765 EAL: Detected lcore 118 as core 10 on socket 1 00:03:29.765 EAL: Detected lcore 119 as core 11 on socket 1 00:03:29.765 EAL: Detected lcore 120 as core 12 on socket 1 00:03:29.765 EAL: Detected lcore 121 as core 13 on socket 1 00:03:29.765 EAL: Detected lcore 122 as core 14 on socket 1 00:03:29.765 EAL: Detected lcore 123 as core 15 on socket 1 00:03:29.765 EAL: Detected lcore 124 as core 16 on socket 1 00:03:29.765 EAL: Detected lcore 125 as core 17 on socket 1 00:03:29.765 EAL: Detected lcore 126 as core 18 on socket 1 00:03:29.765 
EAL: Detected lcore 127 as core 19 on socket 1 00:03:29.765 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:29.765 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:29.765 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:29.765 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:29.765 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:29.765 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:29.765 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:29.765 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:29.765 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:29.765 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:29.765 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:29.765 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:29.765 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:29.765 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:29.765 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:29.765 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:29.765 EAL: Maximum logical cores by configuration: 128 00:03:29.765 EAL: Detected CPU lcores: 128 00:03:29.765 EAL: Detected NUMA nodes: 2 00:03:29.765 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:29.765 EAL: Detected shared linkage of DPDK 00:03:29.765 EAL: No shared files mode enabled, IPC will be disabled 00:03:29.765 EAL: Bus pci wants IOVA as 'DC' 00:03:29.765 EAL: Buses did not request a specific IOVA mode. 00:03:29.765 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:29.765 EAL: Selected IOVA mode 'VA' 00:03:29.765 EAL: Probing VFIO support... 00:03:29.765 EAL: IOMMU type 1 (Type 1) is supported 00:03:29.765 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:29.765 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:29.765 EAL: VFIO support initialized 00:03:29.765 EAL: Ask a virtual area of 0x2e000 bytes 00:03:29.765 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:29.765 EAL: Setting up physically contiguous memory... 
00:03:29.765 EAL: Setting maximum number of open files to 524288 00:03:29.765 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:29.765 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:29.765 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:29.765 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.765 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:29.765 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:29.765 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.765 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:29.765 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:29.765 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.765 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:29.765 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:29.765 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.765 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:29.765 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:29.765 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.765 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:29.765 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:29.765 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.765 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:29.765 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:29.765 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.765 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:29.766 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:29.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.766 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:29.766 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:29.766 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:29.766 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.766 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:29.766 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:29.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.766 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:29.766 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:29.766 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.766 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:29.766 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:29.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.766 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:29.766 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:29.766 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.766 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:29.766 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:29.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.766 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:29.766 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:29.766 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.766 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:29.766 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:29.766 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.766 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:29.766 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:29.766 EAL: Hugepages will be freed exactly as allocated. 
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: TSC frequency is ~2400000 KHz
00:03:29.766 EAL: Main lcore 0 is ready (tid=7fac1ea89a00;cpuset=[0])
00:03:29.766 EAL: Trying to obtain current memory policy.
00:03:29.766 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.766 EAL: Restoring previous memory policy: 0
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was expanded by 2MB
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:29.766 EAL: Mem event callback 'spdk:(nil)' registered
00:03:29.766
00:03:29.766
00:03:29.766 CUnit - A unit testing framework for C - Version 2.1-3
00:03:29.766 http://cunit.sourceforge.net/
00:03:29.766
00:03:29.766
00:03:29.766 Suite: components_suite
00:03:29.766 Test: vtophys_malloc_test ...passed
00:03:29.766 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:29.766 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.766 EAL: Restoring previous memory policy: 4
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was expanded by 4MB
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was shrunk by 4MB
00:03:29.766 EAL: Trying to obtain current memory policy.
00:03:29.766 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.766 EAL: Restoring previous memory policy: 4
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was expanded by 6MB
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was shrunk by 6MB
00:03:29.766 EAL: Trying to obtain current memory policy.
00:03:29.766 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.766 EAL: Restoring previous memory policy: 4
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was expanded by 10MB
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was shrunk by 10MB
00:03:29.766 EAL: Trying to obtain current memory policy.
00:03:29.766 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.766 EAL: Restoring previous memory policy: 4
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was expanded by 18MB
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was shrunk by 18MB
00:03:29.766 EAL: Trying to obtain current memory policy.
00:03:29.766 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.766 EAL: Restoring previous memory policy: 4
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was expanded by 34MB
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was shrunk by 34MB
00:03:29.766 EAL: Trying to obtain current memory policy.
00:03:29.766 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.766 EAL: Restoring previous memory policy: 4
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was expanded by 66MB
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was shrunk by 66MB
00:03:29.766 EAL: Trying to obtain current memory policy.
00:03:29.766 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.766 EAL: Restoring previous memory policy: 4
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was expanded by 130MB
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was shrunk by 130MB
00:03:29.766 EAL: Trying to obtain current memory policy.
00:03:29.766 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.766 EAL: Restoring previous memory policy: 4
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.766 EAL: request: mp_malloc_sync
00:03:29.766 EAL: No shared files mode enabled, IPC is disabled
00:03:29.766 EAL: Heap on socket 0 was expanded by 258MB
00:03:29.766 EAL: Calling mem event callback 'spdk:(nil)'
00:03:30.026 EAL: request: mp_malloc_sync
00:03:30.026 EAL: No shared files mode enabled, IPC is disabled
00:03:30.026 EAL: Heap on socket 0 was shrunk by 258MB
00:03:30.026 EAL: Trying to obtain current memory policy.
00:03:30.026 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:30.026 EAL: Restoring previous memory policy: 4
00:03:30.026 EAL: Calling mem event callback 'spdk:(nil)'
00:03:30.026 EAL: request: mp_malloc_sync
00:03:30.026 EAL: No shared files mode enabled, IPC is disabled
00:03:30.026 EAL: Heap on socket 0 was expanded by 514MB
00:03:30.026 EAL: Calling mem event callback 'spdk:(nil)'
00:03:30.026 EAL: request: mp_malloc_sync
00:03:30.026 EAL: No shared files mode enabled, IPC is disabled
00:03:30.026 EAL: Heap on socket 0 was shrunk by 514MB
00:03:30.026 EAL: Trying to obtain current memory policy.
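The expand/shrink rounds logged in this vtophys_spdk_malloc_test step through 4 MB, 6 MB, 10 MB, 18 MB, 34 MB, 66 MB, 130 MB, 258 MB, 514 MB and finally 1026 MB: each request is 2^k + 2 MB. A quick illustrative sketch (not part of the test itself) reproducing the sequence:

```python
# Heap expansion sizes observed in the EAL log, in MB.
logged = [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]

# Each round requests 2**k + 2 MB for k = 1..10.
generated = [(1 << k) + 2 for k in range(1, 11)]
print(generated == logged)  # True
```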
00:03:30.026 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:30.356 EAL: Restoring previous memory policy: 4
00:03:30.356 EAL: Calling mem event callback 'spdk:(nil)'
00:03:30.356 EAL: request: mp_malloc_sync
00:03:30.356 EAL: No shared files mode enabled, IPC is disabled
00:03:30.356 EAL: Heap on socket 0 was expanded by 1026MB
00:03:30.356 EAL: Calling mem event callback 'spdk:(nil)'
00:03:30.356 EAL: request: mp_malloc_sync
00:03:30.356 EAL: No shared files mode enabled, IPC is disabled
00:03:30.356 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:30.356 passed
00:03:30.356
00:03:30.356 Run Summary: Type Total Ran Passed Failed Inactive
00:03:30.356 suites 1 1 n/a 0 0
00:03:30.356 tests 2 2 2 0 0
00:03:30.356 asserts 497 497 497 0 n/a
00:03:30.356
00:03:30.356 Elapsed time = 0.684 seconds
00:03:30.356 EAL: Calling mem event callback 'spdk:(nil)'
00:03:30.356 EAL: request: mp_malloc_sync
00:03:30.356 EAL: No shared files mode enabled, IPC is disabled
00:03:30.356 EAL: Heap on socket 0 was shrunk by 2MB
00:03:30.356 EAL: No shared files mode enabled, IPC is disabled
00:03:30.356 EAL: No shared files mode enabled, IPC is disabled
00:03:30.356 EAL: No shared files mode enabled, IPC is disabled
00:03:30.356
00:03:30.356 real 0m0.839s
00:03:30.356 user 0m0.426s
00:03:30.356 sys 0m0.383s
00:03:30.356 08:48:55 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:30.356 08:48:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:30.356 ************************************
00:03:30.356 END TEST env_vtophys
00:03:30.356 ************************************
00:03:30.649 08:48:55 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:30.649 08:48:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:30.649 08:48:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:30.649 08:48:55 env -- common/autotest_common.sh@10 -- # set +x
00:03:30.649 ************************************
00:03:30.649 START TEST env_pci
00:03:30.649 ************************************
00:03:30.649 08:48:55 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:30.649
00:03:30.649
00:03:30.649 CUnit - A unit testing framework for C - Version 2.1-3
00:03:30.649 http://cunit.sourceforge.net/
00:03:30.649
00:03:30.649
00:03:30.649 Suite: pci
00:03:30.649 Test: pci_hook ...[2024-11-20 08:48:55.937564] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 448855 has claimed it
00:03:30.649 EAL: Cannot find device (10000:00:01.0)
00:03:30.649 EAL: Failed to attach device on primary process
00:03:30.649 passed
00:03:30.649
00:03:30.649 Run Summary: Type Total Ran Passed Failed Inactive
00:03:30.649 suites 1 1 n/a 0 0
00:03:30.649 tests 1 1 1 0 0
00:03:30.649 asserts 25 25 25 0 n/a
00:03:30.649
00:03:30.649 Elapsed time = 0.030 seconds
00:03:30.649
00:03:30.649 real 0m0.051s
00:03:30.649 user 0m0.015s
00:03:30.649 sys 0m0.036s
00:03:30.649 08:48:55 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:30.649 08:48:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:30.649 ************************************
00:03:30.649 END TEST env_pci
00:03:30.649 ************************************
00:03:30.649 08:48:56 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:30.649 08:48:56 env -- env/env.sh@15 -- # uname
00:03:30.649 08:48:56 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:30.649 08:48:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:30.649 08:48:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:30.649 08:48:56 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:30.649 08:48:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:30.649 08:48:56 env -- common/autotest_common.sh@10 -- # set +x
00:03:30.649 ************************************
00:03:30.649 START TEST env_dpdk_post_init
00:03:30.649 ************************************
00:03:30.649 08:48:56 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:30.649 EAL: Detected CPU lcores: 128
00:03:30.649 EAL: Detected NUMA nodes: 2
00:03:30.649 EAL: Detected shared linkage of DPDK
00:03:30.649 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:30.649 EAL: Selected IOVA mode 'VA'
00:03:30.649 EAL: VFIO support initialized
00:03:30.649 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:30.918 EAL: Using IOMMU type 1 (Type 1)
00:03:30.918 EAL: Ignore mapping IO port bar(1)
00:03:30.918 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:03:31.178 EAL: Ignore mapping IO port bar(1)
00:03:31.178 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:03:31.440 EAL: Ignore mapping IO port bar(1)
00:03:31.440 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:03:31.700 EAL: Ignore mapping IO port bar(1)
00:03:31.700 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:03:31.700 EAL: Ignore mapping IO port bar(1)
00:03:31.961 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:03:31.961 EAL: Ignore mapping IO port bar(1)
00:03:32.222 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:03:32.222 EAL: Ignore mapping IO port bar(1)
00:03:32.484 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:03:32.484 EAL: Ignore mapping IO port bar(1)
00:03:32.484 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:03:32.744 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:03:33.006 EAL: Ignore mapping IO port bar(1)
00:03:33.006 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:03:33.267 EAL: Ignore mapping IO port bar(1)
00:03:33.267 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:03:33.267 EAL: Ignore mapping IO port bar(1)
00:03:33.528 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:03:33.528 EAL: Ignore mapping IO port bar(1)
00:03:33.788 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:03:33.788 EAL: Ignore mapping IO port bar(1)
00:03:34.048 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:03:34.048 EAL: Ignore mapping IO port bar(1)
00:03:34.049 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:03:34.309 EAL: Ignore mapping IO port bar(1)
00:03:34.309 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:03:34.570 EAL: Ignore mapping IO port bar(1)
00:03:34.570 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:03:34.570 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:03:34.570 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:03:34.831 Starting DPDK initialization...
00:03:34.831 Starting SPDK post initialization...
00:03:34.831 SPDK NVMe probe
00:03:34.831 Attaching to 0000:65:00.0
00:03:34.831 Attached to 0000:65:00.0
00:03:34.831 Cleaning up...
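The probe lines in the env_dpdk_post_init output follow a fixed format (`EAL: Probe PCI driver: <driver> (<vendor:device>) device: <BDF> (socket N)`). An illustrative sketch, not part of the harness, showing how such lines could be parsed and tallied; the sample entries are copied from the log above:

```python
import re
from collections import Counter

# Pattern derived from the EAL probe lines in the log above.
pat = re.compile(r"Probe PCI driver: (\S+) \((\w+:\w+)\) device: (\S+) \(socket (\d)\)")

lines = [
    "EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)",
    "EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)",
    "EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)",
]

# Count probed devices per driver name.
drivers = Counter(pat.search(line).group(1) for line in lines)
print(dict(drivers))  # {'spdk_ioat': 2, 'spdk_nvme': 1}
```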
00:03:36.749
00:03:36.749 real 0m5.748s
00:03:36.749 user 0m0.113s
00:03:36.749 sys 0m0.193s
00:03:36.749 08:49:01 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:36.749 08:49:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:36.749 ************************************
00:03:36.749 END TEST env_dpdk_post_init
00:03:36.749 ************************************
00:03:36.749 08:49:01 env -- env/env.sh@26 -- # uname
00:03:36.749 08:49:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:36.749 08:49:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:36.749 08:49:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:36.749 08:49:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:36.749 08:49:01 env -- common/autotest_common.sh@10 -- # set +x
00:03:36.749 ************************************
00:03:36.749 START TEST env_mem_callbacks
00:03:36.749 ************************************
00:03:36.749 08:49:01 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:36.749 EAL: Detected CPU lcores: 128
00:03:36.749 EAL: Detected NUMA nodes: 2
00:03:36.749 EAL: Detected shared linkage of DPDK
00:03:36.749 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:36.749 EAL: Selected IOVA mode 'VA'
00:03:36.749 EAL: VFIO support initialized
00:03:36.749 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:36.749
00:03:36.749
00:03:36.749 CUnit - A unit testing framework for C - Version 2.1-3
00:03:36.749 http://cunit.sourceforge.net/
00:03:36.749
00:03:36.749
00:03:36.749 Suite: memory
00:03:36.749 Test: test ...
00:03:36.749 register 0x200000200000 2097152
00:03:36.749 malloc 3145728
00:03:36.749 register 0x200000400000 4194304
00:03:36.749 buf 0x200000500000 len 3145728 PASSED
00:03:36.749 malloc 64
00:03:36.749 buf 0x2000004fff40 len 64 PASSED
00:03:36.749 malloc 4194304
00:03:36.749 register 0x200000800000 6291456
00:03:36.749 buf 0x200000a00000 len 4194304 PASSED
00:03:36.749 free 0x200000500000 3145728
00:03:36.749 free 0x2000004fff40 64
00:03:36.749 unregister 0x200000400000 4194304 PASSED
00:03:36.749 free 0x200000a00000 4194304
00:03:36.749 unregister 0x200000800000 6291456 PASSED
00:03:36.749 malloc 8388608
00:03:36.749 register 0x200000400000 10485760
00:03:36.749 buf 0x200000600000 len 8388608 PASSED
00:03:36.749 free 0x200000600000 8388608
00:03:36.749 unregister 0x200000400000 10485760 PASSED
00:03:36.749 passed
00:03:36.749
00:03:36.749 Run Summary: Type Total Ran Passed Failed Inactive
00:03:36.749 suites 1 1 n/a 0 0
00:03:36.749 tests 1 1 1 0 0
00:03:36.749 asserts 15 15 15 0 n/a
00:03:36.749
00:03:36.749 Elapsed time = 0.010 seconds
00:03:36.749
00:03:36.749 real 0m0.069s
00:03:36.749 user 0m0.021s
00:03:36.749 sys 0m0.049s
00:03:36.749 08:49:01 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:36.749 08:49:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:36.749 ************************************
00:03:36.749 END TEST env_mem_callbacks
00:03:36.749 ************************************
00:03:36.749
00:03:36.749 real 0m7.535s
00:03:36.749 user 0m1.033s
00:03:36.749 sys 0m1.062s
00:03:36.749 08:49:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:36.749 08:49:02 env -- common/autotest_common.sh@10 -- # set +x
00:03:36.749 ************************************
00:03:36.749 END TEST env
00:03:36.749 ************************************
00:03:36.749 08:49:02 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:36.749 08:49:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:36.749 08:49:02 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:36.749 08:49:02 -- common/autotest_common.sh@10 -- # set +x
00:03:36.749 ************************************
00:03:36.749 START TEST rpc
00:03:36.749 ************************************
00:03:36.749 08:49:02 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:36.749 * Looking for test storage...
00:03:36.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:36.749 08:49:02 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:36.749 08:49:02 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:03:36.749 08:49:02 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:36.749 08:49:02 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:36.749 08:49:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:36.749 08:49:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:36.749 08:49:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:36.749 08:49:02 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:37.010 08:49:02 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:37.010 08:49:02 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:37.010 08:49:02 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:37.010 08:49:02 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:37.010 08:49:02 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:37.010 08:49:02 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:37.010 08:49:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:37.010 08:49:02 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:37.010 08:49:02 rpc -- scripts/common.sh@345 -- # : 1
00:03:37.010 08:49:02 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:37.010 08:49:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:37.010 08:49:02 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:37.010 08:49:02 rpc -- scripts/common.sh@353 -- # local d=1
00:03:37.010 08:49:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:37.010 08:49:02 rpc -- scripts/common.sh@355 -- # echo 1
00:03:37.011 08:49:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:37.011 08:49:02 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:37.011 08:49:02 rpc -- scripts/common.sh@353 -- # local d=2
00:03:37.011 08:49:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:37.011 08:49:02 rpc -- scripts/common.sh@355 -- # echo 2
00:03:37.011 08:49:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:37.011 08:49:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:37.011 08:49:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:37.011 08:49:02 rpc -- scripts/common.sh@368 -- # return 0
00:03:37.011 08:49:02 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:37.011 08:49:02 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:37.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:37.011 --rc genhtml_branch_coverage=1
00:03:37.011 --rc genhtml_function_coverage=1
00:03:37.011 --rc genhtml_legend=1
00:03:37.011 --rc geninfo_all_blocks=1
00:03:37.011 --rc geninfo_unexecuted_blocks=1
00:03:37.011
00:03:37.011 '
00:03:37.011 08:49:02 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:37.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:37.011 --rc genhtml_branch_coverage=1
00:03:37.011 --rc genhtml_function_coverage=1
00:03:37.011 --rc genhtml_legend=1
00:03:37.011 --rc geninfo_all_blocks=1
00:03:37.011 --rc geninfo_unexecuted_blocks=1
00:03:37.011
00:03:37.011 '
00:03:37.011 08:49:02 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:37.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:37.011 --rc genhtml_branch_coverage=1
00:03:37.011 --rc genhtml_function_coverage=1
00:03:37.011 --rc genhtml_legend=1
00:03:37.011 --rc geninfo_all_blocks=1
00:03:37.011 --rc geninfo_unexecuted_blocks=1
00:03:37.011
00:03:37.011 '
00:03:37.011 08:49:02 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:37.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:37.011 --rc genhtml_branch_coverage=1
00:03:37.011 --rc genhtml_function_coverage=1
00:03:37.011 --rc genhtml_legend=1
00:03:37.011 --rc geninfo_all_blocks=1
00:03:37.011 --rc geninfo_unexecuted_blocks=1
00:03:37.011
00:03:37.011 '
00:03:37.011 08:49:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=450168
00:03:37.011 08:49:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:37.011 08:49:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 450168
00:03:37.011 08:49:02 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:37.011 08:49:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 450168 ']'
00:03:37.011 08:49:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:37.011 08:49:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:37.011 08:49:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:37.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:37.011 08:49:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:37.011 08:49:02 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:37.011 [2024-11-20 08:49:02.364062] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization...
00:03:37.011 [2024-11-20 08:49:02.364136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450168 ]
00:03:37.011 [2024-11-20 08:49:02.457874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:37.011 [2024-11-20 08:49:02.509706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:37.011 [2024-11-20 08:49:02.509752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 450168' to capture a snapshot of events at runtime.
00:03:37.011 [2024-11-20 08:49:02.509760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:37.011 [2024-11-20 08:49:02.509767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:37.011 [2024-11-20 08:49:02.509774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid450168 for offline analysis/debug.
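The `lt 1.15 2` / `cmp_versions` trace earlier in this rpc.sh run splits both version strings on `.`, `-` and `:` (the shell sets `IFS=.-:`) and compares the components numerically. A hedged Python equivalent of that logic (an illustrative sketch of the comparison, not SPDK code):

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    # Split on the same separators the shell helper uses (IFS=.-:).
    a = [int(x) for x in re.split(r"[.\-:]", v1)]
    b = [int(x) for x in re.split(r"[.\-:]", v2)]
    # Pad the shorter list with zeros, then compare component-wise.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    if op == "<":
        return a < b
    if op == ">":
        return a > b
    raise ValueError(f"unsupported operator: {op}")

print(cmp_versions("1.15", "<", "2"))  # True, as in the "lt 1.15 2" trace above
```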
00:03:37.011 [2024-11-20 08:49:02.510572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:37.955 08:49:03 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:37.955 08:49:03 rpc -- common/autotest_common.sh@868 -- # return 0
00:03:37.955 08:49:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:37.955 08:49:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:37.955 08:49:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:37.955 08:49:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:37.955 08:49:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:37.955 08:49:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:37.955 08:49:03 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:37.955 ************************************
00:03:37.955 START TEST rpc_integrity
00:03:37.955 ************************************
00:03:37.955 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:03:37.955 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:37.955 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:37.955 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:37.955 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:37.955 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:37.955 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:37.955 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:37.955 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:37.955 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:37.955 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:37.955 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:37.955 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:37.955 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:37.955 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:37.955 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:37.955 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:37.955 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:37.955 {
00:03:37.955 "name": "Malloc0",
00:03:37.955 "aliases": [
00:03:37.955 "3b2f07b9-b973-445a-8423-a198c1ebd6c7"
00:03:37.955 ],
00:03:37.955 "product_name": "Malloc disk",
00:03:37.955 "block_size": 512,
00:03:37.955 "num_blocks": 16384,
00:03:37.955 "uuid": "3b2f07b9-b973-445a-8423-a198c1ebd6c7",
00:03:37.955 "assigned_rate_limits": {
00:03:37.955 "rw_ios_per_sec": 0,
00:03:37.955 "rw_mbytes_per_sec": 0,
00:03:37.955 "r_mbytes_per_sec": 0,
00:03:37.955 "w_mbytes_per_sec": 0
00:03:37.955 },
00:03:37.955 "claimed": false,
00:03:37.955 "zoned": false,
00:03:37.955 "supported_io_types": {
00:03:37.955 "read": true,
00:03:37.955 "write": true,
00:03:37.955 "unmap": true,
00:03:37.955 "flush": true,
00:03:37.956 "reset": true,
00:03:37.956 "nvme_admin": false,
00:03:37.956 "nvme_io": false,
00:03:37.956 "nvme_io_md": false,
00:03:37.956 "write_zeroes": true,
00:03:37.956 "zcopy": true,
00:03:37.956 "get_zone_info": false,
00:03:37.956 "zone_management": false,
00:03:37.956 "zone_append": false,
00:03:37.956 "compare": false,
00:03:37.956 "compare_and_write": false,
00:03:37.956 "abort": true,
00:03:37.956 "seek_hole": false,
00:03:37.956 "seek_data": false,
00:03:37.956 "copy": true,
00:03:37.956 "nvme_iov_md": false
00:03:37.956 },
00:03:37.956 "memory_domains": [
00:03:37.956 {
00:03:37.956 "dma_device_id": "system",
00:03:37.956 "dma_device_type": 1
00:03:37.956 },
00:03:37.956 {
00:03:37.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:37.956 "dma_device_type": 2
00:03:37.956 }
00:03:37.956 ],
00:03:37.956 "driver_specific": {}
00:03:37.956 }
00:03:37.956 ]'
00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:37.956 [2024-11-20 08:49:03.323381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:37.956 [2024-11-20 08:49:03.323428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:37.956 [2024-11-20 08:49:03.323445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x249edb0
00:03:37.956 [2024-11-20 08:49:03.323453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:37.956 [2024-11-20 08:49:03.325010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:37.956 [2024-11-20 08:49:03.325047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:37.956 Passthru0
00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:37.956 {
00:03:37.956 "name": "Malloc0",
00:03:37.956 "aliases": [
00:03:37.956 "3b2f07b9-b973-445a-8423-a198c1ebd6c7"
00:03:37.956 ],
00:03:37.956 "product_name": "Malloc disk",
00:03:37.956 "block_size": 512,
00:03:37.956 "num_blocks": 16384,
00:03:37.956 "uuid": "3b2f07b9-b973-445a-8423-a198c1ebd6c7",
00:03:37.956 "assigned_rate_limits": {
00:03:37.956 "rw_ios_per_sec": 0,
00:03:37.956 "rw_mbytes_per_sec": 0,
00:03:37.956 "r_mbytes_per_sec": 0,
00:03:37.956 "w_mbytes_per_sec": 0
00:03:37.956 },
00:03:37.956 "claimed": true,
00:03:37.956 "claim_type": "exclusive_write",
00:03:37.956 "zoned": false,
00:03:37.956 "supported_io_types": {
00:03:37.956 "read": true,
00:03:37.956 "write": true,
00:03:37.956 "unmap": true,
00:03:37.956 "flush": true,
00:03:37.956 "reset": true,
00:03:37.956 "nvme_admin": false,
00:03:37.956 "nvme_io": false,
00:03:37.956 "nvme_io_md": false,
00:03:37.956 "write_zeroes": true,
00:03:37.956 "zcopy": true,
00:03:37.956 "get_zone_info": false,
00:03:37.956 "zone_management": false,
00:03:37.956 "zone_append": false,
00:03:37.956 "compare": false,
00:03:37.956 "compare_and_write": false,
00:03:37.956 "abort": true,
00:03:37.956 "seek_hole": false,
00:03:37.956 "seek_data": false,
00:03:37.956 "copy": true,
00:03:37.956 "nvme_iov_md": false
00:03:37.956 },
00:03:37.956 "memory_domains": [
00:03:37.956 {
00:03:37.956 "dma_device_id": "system",
00:03:37.956 "dma_device_type": 1
00:03:37.956 },
00:03:37.956 {
00:03:37.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:37.956 "dma_device_type": 2
00:03:37.956 }
00:03:37.956 ],
00:03:37.956 "driver_specific": {}
00:03:37.956 },
00:03:37.956 {
00:03:37.956 "name": "Passthru0", 00:03:37.956 "aliases": [ 00:03:37.956 "55643041-8eb0-510b-94fa-e63fb69e18ee" 00:03:37.956 ], 00:03:37.956 "product_name": "passthru", 00:03:37.956 "block_size": 512, 00:03:37.956 "num_blocks": 16384, 00:03:37.956 "uuid": "55643041-8eb0-510b-94fa-e63fb69e18ee", 00:03:37.956 "assigned_rate_limits": { 00:03:37.956 "rw_ios_per_sec": 0, 00:03:37.956 "rw_mbytes_per_sec": 0, 00:03:37.956 "r_mbytes_per_sec": 0, 00:03:37.956 "w_mbytes_per_sec": 0 00:03:37.956 }, 00:03:37.956 "claimed": false, 00:03:37.956 "zoned": false, 00:03:37.956 "supported_io_types": { 00:03:37.956 "read": true, 00:03:37.956 "write": true, 00:03:37.956 "unmap": true, 00:03:37.956 "flush": true, 00:03:37.956 "reset": true, 00:03:37.956 "nvme_admin": false, 00:03:37.956 "nvme_io": false, 00:03:37.956 "nvme_io_md": false, 00:03:37.956 "write_zeroes": true, 00:03:37.956 "zcopy": true, 00:03:37.956 "get_zone_info": false, 00:03:37.956 "zone_management": false, 00:03:37.956 "zone_append": false, 00:03:37.956 "compare": false, 00:03:37.956 "compare_and_write": false, 00:03:37.956 "abort": true, 00:03:37.956 "seek_hole": false, 00:03:37.956 "seek_data": false, 00:03:37.956 "copy": true, 00:03:37.956 "nvme_iov_md": false 00:03:37.956 }, 00:03:37.956 "memory_domains": [ 00:03:37.956 { 00:03:37.956 "dma_device_id": "system", 00:03:37.956 "dma_device_type": 1 00:03:37.956 }, 00:03:37.956 { 00:03:37.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:37.956 "dma_device_type": 2 00:03:37.956 } 00:03:37.956 ], 00:03:37.956 "driver_specific": { 00:03:37.956 "passthru": { 00:03:37.956 "name": "Passthru0", 00:03:37.956 "base_bdev_name": "Malloc0" 00:03:37.956 } 00:03:37.956 } 00:03:37.956 } 00:03:37.956 ]' 00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:37.956 08:49:03 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.956 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:37.956 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:38.218 08:49:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:38.218 00:03:38.218 real 0m0.295s 00:03:38.218 user 0m0.186s 00:03:38.218 sys 0m0.044s 00:03:38.218 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.218 08:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.218 ************************************ 00:03:38.218 END TEST rpc_integrity 00:03:38.218 ************************************ 00:03:38.218 08:49:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:38.218 08:49:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.218 08:49:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.218 08:49:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.218 ************************************ 00:03:38.218 START TEST rpc_plugins 
00:03:38.218 ************************************ 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:38.218 08:49:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.218 08:49:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:38.218 08:49:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.218 08:49:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:38.218 { 00:03:38.218 "name": "Malloc1", 00:03:38.218 "aliases": [ 00:03:38.218 "1835d0b5-9057-400a-8149-f55e15333dc0" 00:03:38.218 ], 00:03:38.218 "product_name": "Malloc disk", 00:03:38.218 "block_size": 4096, 00:03:38.218 "num_blocks": 256, 00:03:38.218 "uuid": "1835d0b5-9057-400a-8149-f55e15333dc0", 00:03:38.218 "assigned_rate_limits": { 00:03:38.218 "rw_ios_per_sec": 0, 00:03:38.218 "rw_mbytes_per_sec": 0, 00:03:38.218 "r_mbytes_per_sec": 0, 00:03:38.218 "w_mbytes_per_sec": 0 00:03:38.218 }, 00:03:38.218 "claimed": false, 00:03:38.218 "zoned": false, 00:03:38.218 "supported_io_types": { 00:03:38.218 "read": true, 00:03:38.218 "write": true, 00:03:38.218 "unmap": true, 00:03:38.218 "flush": true, 00:03:38.218 "reset": true, 00:03:38.218 "nvme_admin": false, 00:03:38.218 "nvme_io": false, 00:03:38.218 "nvme_io_md": false, 00:03:38.218 "write_zeroes": true, 00:03:38.218 "zcopy": true, 00:03:38.218 "get_zone_info": false, 00:03:38.218 "zone_management": false, 00:03:38.218 
"zone_append": false, 00:03:38.218 "compare": false, 00:03:38.218 "compare_and_write": false, 00:03:38.218 "abort": true, 00:03:38.218 "seek_hole": false, 00:03:38.218 "seek_data": false, 00:03:38.218 "copy": true, 00:03:38.218 "nvme_iov_md": false 00:03:38.218 }, 00:03:38.218 "memory_domains": [ 00:03:38.218 { 00:03:38.218 "dma_device_id": "system", 00:03:38.218 "dma_device_type": 1 00:03:38.218 }, 00:03:38.218 { 00:03:38.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.218 "dma_device_type": 2 00:03:38.218 } 00:03:38.218 ], 00:03:38.218 "driver_specific": {} 00:03:38.218 } 00:03:38.218 ]' 00:03:38.218 08:49:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:38.218 08:49:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:38.218 08:49:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.218 08:49:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.218 08:49:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:38.218 08:49:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:38.218 08:49:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:38.218 00:03:38.218 real 0m0.153s 00:03:38.218 user 0m0.091s 00:03:38.218 sys 0m0.025s 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.218 08:49:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.218 ************************************ 
00:03:38.218 END TEST rpc_plugins 00:03:38.218 ************************************ 00:03:38.480 08:49:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:38.480 08:49:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.480 08:49:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.480 08:49:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.480 ************************************ 00:03:38.480 START TEST rpc_trace_cmd_test 00:03:38.480 ************************************ 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:38.480 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid450168", 00:03:38.480 "tpoint_group_mask": "0x8", 00:03:38.480 "iscsi_conn": { 00:03:38.480 "mask": "0x2", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "scsi": { 00:03:38.480 "mask": "0x4", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "bdev": { 00:03:38.480 "mask": "0x8", 00:03:38.480 "tpoint_mask": "0xffffffffffffffff" 00:03:38.480 }, 00:03:38.480 "nvmf_rdma": { 00:03:38.480 "mask": "0x10", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "nvmf_tcp": { 00:03:38.480 "mask": "0x20", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "ftl": { 00:03:38.480 "mask": "0x40", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "blobfs": { 00:03:38.480 "mask": "0x80", 00:03:38.480 
"tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "dsa": { 00:03:38.480 "mask": "0x200", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "thread": { 00:03:38.480 "mask": "0x400", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "nvme_pcie": { 00:03:38.480 "mask": "0x800", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "iaa": { 00:03:38.480 "mask": "0x1000", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "nvme_tcp": { 00:03:38.480 "mask": "0x2000", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "bdev_nvme": { 00:03:38.480 "mask": "0x4000", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "sock": { 00:03:38.480 "mask": "0x8000", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "blob": { 00:03:38.480 "mask": "0x10000", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "bdev_raid": { 00:03:38.480 "mask": "0x20000", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 }, 00:03:38.480 "scheduler": { 00:03:38.480 "mask": "0x40000", 00:03:38.480 "tpoint_mask": "0x0" 00:03:38.480 } 00:03:38.480 }' 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:38.480 08:49:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:38.742 08:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:38.742 08:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:38.742 08:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:38.742 00:03:38.742 real 0m0.251s 00:03:38.742 user 0m0.208s 00:03:38.742 sys 0m0.035s 00:03:38.742 08:49:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.742 08:49:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:38.742 ************************************ 00:03:38.742 END TEST rpc_trace_cmd_test 00:03:38.742 ************************************ 00:03:38.742 08:49:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:38.742 08:49:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:38.742 08:49:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:38.742 08:49:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.742 08:49:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.742 08:49:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.742 ************************************ 00:03:38.742 START TEST rpc_daemon_integrity 00:03:38.742 ************************************ 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:38.742 { 00:03:38.742 "name": "Malloc2", 00:03:38.742 "aliases": [ 00:03:38.742 "053b85fc-7139-4dfa-98a5-69d6d50f917e" 00:03:38.742 ], 00:03:38.742 "product_name": "Malloc disk", 00:03:38.742 "block_size": 512, 00:03:38.742 "num_blocks": 16384, 00:03:38.742 "uuid": "053b85fc-7139-4dfa-98a5-69d6d50f917e", 00:03:38.742 "assigned_rate_limits": { 00:03:38.742 "rw_ios_per_sec": 0, 00:03:38.742 "rw_mbytes_per_sec": 0, 00:03:38.742 "r_mbytes_per_sec": 0, 00:03:38.742 "w_mbytes_per_sec": 0 00:03:38.742 }, 00:03:38.742 "claimed": false, 00:03:38.742 "zoned": false, 00:03:38.742 "supported_io_types": { 00:03:38.742 "read": true, 00:03:38.742 "write": true, 00:03:38.742 "unmap": true, 00:03:38.742 "flush": true, 00:03:38.742 "reset": true, 00:03:38.742 "nvme_admin": false, 00:03:38.742 "nvme_io": false, 00:03:38.742 "nvme_io_md": false, 00:03:38.742 "write_zeroes": true, 00:03:38.742 "zcopy": true, 00:03:38.742 "get_zone_info": false, 00:03:38.742 "zone_management": false, 00:03:38.742 "zone_append": false, 00:03:38.742 "compare": false, 00:03:38.742 "compare_and_write": false, 00:03:38.742 "abort": true, 00:03:38.742 "seek_hole": false, 00:03:38.742 "seek_data": false, 00:03:38.742 "copy": true, 00:03:38.742 "nvme_iov_md": false 00:03:38.742 }, 00:03:38.742 "memory_domains": [ 00:03:38.742 { 
00:03:38.742 "dma_device_id": "system", 00:03:38.742 "dma_device_type": 1 00:03:38.742 }, 00:03:38.742 { 00:03:38.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.742 "dma_device_type": 2 00:03:38.742 } 00:03:38.742 ], 00:03:38.742 "driver_specific": {} 00:03:38.742 } 00:03:38.742 ]' 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:38.742 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:39.004 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:39.004 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.004 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.004 [2024-11-20 08:49:04.273949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:39.004 [2024-11-20 08:49:04.273990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:39.004 [2024-11-20 08:49:04.274007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25cf8d0 00:03:39.004 [2024-11-20 08:49:04.274014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:39.004 [2024-11-20 08:49:04.275470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:39.004 [2024-11-20 08:49:04.275508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:39.004 Passthru0 00:03:39.004 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.004 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:39.004 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.004 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.004 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
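The rpc_daemon_integrity test above creates a passthru bdev on top of Malloc2 and then validates the `bdev_get_bdevs` output with a `jq length` check (expecting 2 entries, then 0 after deletion). The pattern can be sketched in pure bash without a live spdk_tgt; here `fake_rpc` is a hypothetical stub standing in for `rpc_cmd bdev_get_bdevs`, and the entry count is taken by grepping for `"name"` keys instead of `jq length`:

```shell
#!/usr/bin/env bash
# Sketch of the rpc_integrity check pattern (not the SPDK harness itself):
# fetch the bdev list, count entries, and assert the expected length.

fake_rpc() {            # hypothetical stub for: rpc_cmd bdev_get_bdevs
  echo '[{"name":"Malloc2"},{"name":"Passthru0"}]'
}

bdevs=$(fake_rpc)
# Poor man's `jq length`: each bdev object carries exactly one "name" key.
count=$(grep -o '"name"' <<<"$bdevs" | wc -l)
[ "$count" -eq 2 ] && echo "length ok: $count"
```

In the real test the same check runs twice: once after `bdev_passthru_create` (expecting 2) and once after both deletions (expecting 0).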
00:03:39.004 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:39.004 { 00:03:39.004 "name": "Malloc2", 00:03:39.004 "aliases": [ 00:03:39.004 "053b85fc-7139-4dfa-98a5-69d6d50f917e" 00:03:39.004 ], 00:03:39.004 "product_name": "Malloc disk", 00:03:39.004 "block_size": 512, 00:03:39.004 "num_blocks": 16384, 00:03:39.004 "uuid": "053b85fc-7139-4dfa-98a5-69d6d50f917e", 00:03:39.004 "assigned_rate_limits": { 00:03:39.004 "rw_ios_per_sec": 0, 00:03:39.004 "rw_mbytes_per_sec": 0, 00:03:39.004 "r_mbytes_per_sec": 0, 00:03:39.004 "w_mbytes_per_sec": 0 00:03:39.004 }, 00:03:39.004 "claimed": true, 00:03:39.004 "claim_type": "exclusive_write", 00:03:39.004 "zoned": false, 00:03:39.004 "supported_io_types": { 00:03:39.004 "read": true, 00:03:39.004 "write": true, 00:03:39.004 "unmap": true, 00:03:39.004 "flush": true, 00:03:39.004 "reset": true, 00:03:39.004 "nvme_admin": false, 00:03:39.004 "nvme_io": false, 00:03:39.004 "nvme_io_md": false, 00:03:39.004 "write_zeroes": true, 00:03:39.004 "zcopy": true, 00:03:39.005 "get_zone_info": false, 00:03:39.005 "zone_management": false, 00:03:39.005 "zone_append": false, 00:03:39.005 "compare": false, 00:03:39.005 "compare_and_write": false, 00:03:39.005 "abort": true, 00:03:39.005 "seek_hole": false, 00:03:39.005 "seek_data": false, 00:03:39.005 "copy": true, 00:03:39.005 "nvme_iov_md": false 00:03:39.005 }, 00:03:39.005 "memory_domains": [ 00:03:39.005 { 00:03:39.005 "dma_device_id": "system", 00:03:39.005 "dma_device_type": 1 00:03:39.005 }, 00:03:39.005 { 00:03:39.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:39.005 "dma_device_type": 2 00:03:39.005 } 00:03:39.005 ], 00:03:39.005 "driver_specific": {} 00:03:39.005 }, 00:03:39.005 { 00:03:39.005 "name": "Passthru0", 00:03:39.005 "aliases": [ 00:03:39.005 "62ff680a-f415-595f-a5e9-c35352d465c0" 00:03:39.005 ], 00:03:39.005 "product_name": "passthru", 00:03:39.005 "block_size": 512, 00:03:39.005 "num_blocks": 16384, 00:03:39.005 "uuid": 
"62ff680a-f415-595f-a5e9-c35352d465c0", 00:03:39.005 "assigned_rate_limits": { 00:03:39.005 "rw_ios_per_sec": 0, 00:03:39.005 "rw_mbytes_per_sec": 0, 00:03:39.005 "r_mbytes_per_sec": 0, 00:03:39.005 "w_mbytes_per_sec": 0 00:03:39.005 }, 00:03:39.005 "claimed": false, 00:03:39.005 "zoned": false, 00:03:39.005 "supported_io_types": { 00:03:39.005 "read": true, 00:03:39.005 "write": true, 00:03:39.005 "unmap": true, 00:03:39.005 "flush": true, 00:03:39.005 "reset": true, 00:03:39.005 "nvme_admin": false, 00:03:39.005 "nvme_io": false, 00:03:39.005 "nvme_io_md": false, 00:03:39.005 "write_zeroes": true, 00:03:39.005 "zcopy": true, 00:03:39.005 "get_zone_info": false, 00:03:39.005 "zone_management": false, 00:03:39.005 "zone_append": false, 00:03:39.005 "compare": false, 00:03:39.005 "compare_and_write": false, 00:03:39.005 "abort": true, 00:03:39.005 "seek_hole": false, 00:03:39.005 "seek_data": false, 00:03:39.005 "copy": true, 00:03:39.005 "nvme_iov_md": false 00:03:39.005 }, 00:03:39.005 "memory_domains": [ 00:03:39.005 { 00:03:39.005 "dma_device_id": "system", 00:03:39.005 "dma_device_type": 1 00:03:39.005 }, 00:03:39.005 { 00:03:39.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:39.005 "dma_device_type": 2 00:03:39.005 } 00:03:39.005 ], 00:03:39.005 "driver_specific": { 00:03:39.005 "passthru": { 00:03:39.005 "name": "Passthru0", 00:03:39.005 "base_bdev_name": "Malloc2" 00:03:39.005 } 00:03:39.005 } 00:03:39.005 } 00:03:39.005 ]' 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:39.005 00:03:39.005 real 0m0.304s 00:03:39.005 user 0m0.191s 00:03:39.005 sys 0m0.043s 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.005 08:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.005 ************************************ 00:03:39.005 END TEST rpc_daemon_integrity 00:03:39.005 ************************************ 00:03:39.005 08:49:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:39.005 08:49:04 rpc -- rpc/rpc.sh@84 -- # killprocess 450168 00:03:39.005 08:49:04 rpc -- common/autotest_common.sh@954 -- # '[' -z 450168 ']' 00:03:39.005 08:49:04 rpc -- common/autotest_common.sh@958 -- # kill -0 450168 00:03:39.005 08:49:04 rpc -- common/autotest_common.sh@959 -- # uname 00:03:39.005 08:49:04 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:39.005 08:49:04 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 450168 00:03:39.266 08:49:04 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:39.266 08:49:04 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:39.266 08:49:04 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 450168' 00:03:39.266 killing process with pid 450168 00:03:39.266 08:49:04 rpc -- common/autotest_common.sh@973 -- # kill 450168 00:03:39.266 08:49:04 rpc -- common/autotest_common.sh@978 -- # wait 450168 00:03:39.266 00:03:39.266 real 0m2.692s 00:03:39.266 user 0m3.444s 00:03:39.266 sys 0m0.820s 00:03:39.266 08:49:04 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.266 08:49:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.266 ************************************ 00:03:39.266 END TEST rpc 00:03:39.266 ************************************ 00:03:39.526 08:49:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:39.526 08:49:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.526 08:49:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.527 08:49:04 -- common/autotest_common.sh@10 -- # set +x 00:03:39.527 ************************************ 00:03:39.527 START TEST skip_rpc 00:03:39.527 ************************************ 00:03:39.527 08:49:04 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:39.527 * Looking for test storage... 
00:03:39.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:39.527 08:49:04 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:39.527 08:49:04 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:39.527 08:49:04 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:39.527 08:49:05 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.527 08:49:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.789 08:49:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:39.789 08:49:05 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.789 08:49:05 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:39.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.789 --rc genhtml_branch_coverage=1 00:03:39.789 --rc genhtml_function_coverage=1 00:03:39.789 --rc genhtml_legend=1 00:03:39.789 --rc geninfo_all_blocks=1 00:03:39.789 --rc geninfo_unexecuted_blocks=1 00:03:39.789 00:03:39.789 ' 00:03:39.789 08:49:05 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:39.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.789 --rc genhtml_branch_coverage=1 00:03:39.789 --rc genhtml_function_coverage=1 00:03:39.789 --rc genhtml_legend=1 00:03:39.789 --rc geninfo_all_blocks=1 00:03:39.789 --rc geninfo_unexecuted_blocks=1 00:03:39.789 00:03:39.789 ' 00:03:39.789 08:49:05 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:39.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.789 --rc genhtml_branch_coverage=1 00:03:39.789 --rc genhtml_function_coverage=1 00:03:39.789 --rc genhtml_legend=1 00:03:39.789 --rc geninfo_all_blocks=1 00:03:39.789 --rc geninfo_unexecuted_blocks=1 00:03:39.789 00:03:39.789 ' 00:03:39.789 08:49:05 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:39.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.789 --rc genhtml_branch_coverage=1 00:03:39.789 --rc genhtml_function_coverage=1 00:03:39.789 --rc genhtml_legend=1 00:03:39.789 --rc geninfo_all_blocks=1 00:03:39.789 --rc geninfo_unexecuted_blocks=1 00:03:39.789 00:03:39.789 ' 00:03:39.789 08:49:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:39.789 08:49:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:39.789 08:49:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:39.789 08:49:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.789 08:49:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.789 08:49:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.789 ************************************ 00:03:39.789 START TEST skip_rpc 00:03:39.789 ************************************ 00:03:39.789 08:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:39.789 08:49:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=451018 00:03:39.789 08:49:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:39.789 08:49:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:39.789 08:49:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
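The skip_rpc test above starts `spdk_tgt` with `--no-rpc-server` and then asserts that `rpc_cmd spdk_get_version` fails, using the `NOT` wrapper from autotest_common.sh. A minimal sketch of that exit-status inversion (simplified; the real helper also inspects `es > 128` to distinguish signal deaths, as visible in the trace below):

```shell
#!/usr/bin/env bash
# Sketch of the `NOT` helper's core logic: the test passes only when the
# wrapped command fails, so a reachable RPC server would be a test failure.

NOT() {
  if "$@"; then
    return 1        # wrapped command unexpectedly succeeded
  fi
  return 0          # wrapped command failed, as the test expects
}

NOT false && echo "NOT false -> ok"
NOT true  || echo "NOT true  -> caught unexpected success"
```

This is why the trace shows `[[ 1 == 0 ]]` and `es=1` treated as the passing path: with no RPC server listening, `rpc_cmd` must return nonzero.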
00:03:39.789 [2024-11-20 08:49:05.166410] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:03:39.789 [2024-11-20 08:49:05.166474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451018 ] 00:03:39.789 [2024-11-20 08:49:05.257396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.789 [2024-11-20 08:49:05.309251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:45.082 08:49:10 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 451018 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 451018 ']' 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 451018 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 451018 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 451018' 00:03:45.082 killing process with pid 451018 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 451018 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 451018 00:03:45.082 00:03:45.082 real 0m5.264s 00:03:45.082 user 0m5.013s 00:03:45.082 sys 0m0.295s 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.082 08:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.082 ************************************ 00:03:45.082 END TEST skip_rpc 00:03:45.082 ************************************ 00:03:45.082 08:49:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:45.082 08:49:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.082 08:49:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.082 08:49:10 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:03:45.082 ************************************ 00:03:45.082 START TEST skip_rpc_with_json 00:03:45.082 ************************************ 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=452057 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 452057 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 452057 ']' 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:45.082 08:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.082 [2024-11-20 08:49:10.506993] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:03:45.082 [2024-11-20 08:49:10.507050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452057 ] 00:03:45.082 [2024-11-20 08:49:10.593811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.343 [2024-11-20 08:49:10.628912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.914 [2024-11-20 08:49:11.297035] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:45.914 request: 00:03:45.914 { 00:03:45.914 "trtype": "tcp", 00:03:45.914 "method": "nvmf_get_transports", 00:03:45.914 "req_id": 1 00:03:45.914 } 00:03:45.914 Got JSON-RPC error response 00:03:45.914 response: 00:03:45.914 { 00:03:45.914 "code": -19, 00:03:45.914 "message": "No such device" 00:03:45.914 } 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.914 [2024-11-20 08:49:11.309130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:45.914 08:49:11 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.914 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:46.176 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.176 08:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:46.176 { 00:03:46.176 "subsystems": [ 00:03:46.176 { 00:03:46.176 "subsystem": "fsdev", 00:03:46.176 "config": [ 00:03:46.176 { 00:03:46.176 "method": "fsdev_set_opts", 00:03:46.176 "params": { 00:03:46.176 "fsdev_io_pool_size": 65535, 00:03:46.176 "fsdev_io_cache_size": 256 00:03:46.176 } 00:03:46.176 } 00:03:46.176 ] 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "vfio_user_target", 00:03:46.176 "config": null 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "keyring", 00:03:46.176 "config": [] 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "iobuf", 00:03:46.176 "config": [ 00:03:46.176 { 00:03:46.176 "method": "iobuf_set_options", 00:03:46.176 "params": { 00:03:46.176 "small_pool_count": 8192, 00:03:46.176 "large_pool_count": 1024, 00:03:46.176 "small_bufsize": 8192, 00:03:46.176 "large_bufsize": 135168, 00:03:46.176 "enable_numa": false 00:03:46.176 } 00:03:46.176 } 00:03:46.176 ] 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "sock", 00:03:46.176 "config": [ 00:03:46.176 { 00:03:46.176 "method": "sock_set_default_impl", 00:03:46.176 "params": { 00:03:46.176 "impl_name": "posix" 00:03:46.176 } 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "method": "sock_impl_set_options", 00:03:46.176 "params": { 00:03:46.176 "impl_name": "ssl", 00:03:46.176 "recv_buf_size": 4096, 00:03:46.176 "send_buf_size": 4096, 
00:03:46.176 "enable_recv_pipe": true, 00:03:46.176 "enable_quickack": false, 00:03:46.176 "enable_placement_id": 0, 00:03:46.176 "enable_zerocopy_send_server": true, 00:03:46.176 "enable_zerocopy_send_client": false, 00:03:46.176 "zerocopy_threshold": 0, 00:03:46.176 "tls_version": 0, 00:03:46.176 "enable_ktls": false 00:03:46.176 } 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "method": "sock_impl_set_options", 00:03:46.176 "params": { 00:03:46.176 "impl_name": "posix", 00:03:46.176 "recv_buf_size": 2097152, 00:03:46.176 "send_buf_size": 2097152, 00:03:46.176 "enable_recv_pipe": true, 00:03:46.176 "enable_quickack": false, 00:03:46.176 "enable_placement_id": 0, 00:03:46.176 "enable_zerocopy_send_server": true, 00:03:46.176 "enable_zerocopy_send_client": false, 00:03:46.176 "zerocopy_threshold": 0, 00:03:46.176 "tls_version": 0, 00:03:46.176 "enable_ktls": false 00:03:46.176 } 00:03:46.176 } 00:03:46.176 ] 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "vmd", 00:03:46.176 "config": [] 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "accel", 00:03:46.176 "config": [ 00:03:46.176 { 00:03:46.176 "method": "accel_set_options", 00:03:46.176 "params": { 00:03:46.176 "small_cache_size": 128, 00:03:46.176 "large_cache_size": 16, 00:03:46.176 "task_count": 2048, 00:03:46.176 "sequence_count": 2048, 00:03:46.176 "buf_count": 2048 00:03:46.176 } 00:03:46.176 } 00:03:46.176 ] 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "bdev", 00:03:46.176 "config": [ 00:03:46.176 { 00:03:46.176 "method": "bdev_set_options", 00:03:46.176 "params": { 00:03:46.176 "bdev_io_pool_size": 65535, 00:03:46.176 "bdev_io_cache_size": 256, 00:03:46.176 "bdev_auto_examine": true, 00:03:46.176 "iobuf_small_cache_size": 128, 00:03:46.176 "iobuf_large_cache_size": 16 00:03:46.176 } 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "method": "bdev_raid_set_options", 00:03:46.176 "params": { 00:03:46.176 "process_window_size_kb": 1024, 00:03:46.176 "process_max_bandwidth_mb_sec": 0 
00:03:46.176 } 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "method": "bdev_iscsi_set_options", 00:03:46.176 "params": { 00:03:46.176 "timeout_sec": 30 00:03:46.176 } 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "method": "bdev_nvme_set_options", 00:03:46.176 "params": { 00:03:46.176 "action_on_timeout": "none", 00:03:46.176 "timeout_us": 0, 00:03:46.176 "timeout_admin_us": 0, 00:03:46.176 "keep_alive_timeout_ms": 10000, 00:03:46.176 "arbitration_burst": 0, 00:03:46.176 "low_priority_weight": 0, 00:03:46.176 "medium_priority_weight": 0, 00:03:46.176 "high_priority_weight": 0, 00:03:46.176 "nvme_adminq_poll_period_us": 10000, 00:03:46.176 "nvme_ioq_poll_period_us": 0, 00:03:46.176 "io_queue_requests": 0, 00:03:46.176 "delay_cmd_submit": true, 00:03:46.176 "transport_retry_count": 4, 00:03:46.176 "bdev_retry_count": 3, 00:03:46.176 "transport_ack_timeout": 0, 00:03:46.176 "ctrlr_loss_timeout_sec": 0, 00:03:46.176 "reconnect_delay_sec": 0, 00:03:46.176 "fast_io_fail_timeout_sec": 0, 00:03:46.176 "disable_auto_failback": false, 00:03:46.176 "generate_uuids": false, 00:03:46.176 "transport_tos": 0, 00:03:46.176 "nvme_error_stat": false, 00:03:46.176 "rdma_srq_size": 0, 00:03:46.176 "io_path_stat": false, 00:03:46.176 "allow_accel_sequence": false, 00:03:46.176 "rdma_max_cq_size": 0, 00:03:46.176 "rdma_cm_event_timeout_ms": 0, 00:03:46.176 "dhchap_digests": [ 00:03:46.176 "sha256", 00:03:46.176 "sha384", 00:03:46.176 "sha512" 00:03:46.176 ], 00:03:46.176 "dhchap_dhgroups": [ 00:03:46.176 "null", 00:03:46.176 "ffdhe2048", 00:03:46.176 "ffdhe3072", 00:03:46.176 "ffdhe4096", 00:03:46.176 "ffdhe6144", 00:03:46.176 "ffdhe8192" 00:03:46.176 ] 00:03:46.176 } 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "method": "bdev_nvme_set_hotplug", 00:03:46.176 "params": { 00:03:46.176 "period_us": 100000, 00:03:46.176 "enable": false 00:03:46.176 } 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "method": "bdev_wait_for_examine" 00:03:46.176 } 00:03:46.176 ] 00:03:46.176 }, 00:03:46.176 { 
00:03:46.176 "subsystem": "scsi", 00:03:46.176 "config": null 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "scheduler", 00:03:46.176 "config": [ 00:03:46.176 { 00:03:46.176 "method": "framework_set_scheduler", 00:03:46.176 "params": { 00:03:46.176 "name": "static" 00:03:46.176 } 00:03:46.176 } 00:03:46.176 ] 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "vhost_scsi", 00:03:46.176 "config": [] 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "vhost_blk", 00:03:46.176 "config": [] 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "ublk", 00:03:46.176 "config": [] 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "nbd", 00:03:46.176 "config": [] 00:03:46.176 }, 00:03:46.176 { 00:03:46.176 "subsystem": "nvmf", 00:03:46.177 "config": [ 00:03:46.177 { 00:03:46.177 "method": "nvmf_set_config", 00:03:46.177 "params": { 00:03:46.177 "discovery_filter": "match_any", 00:03:46.177 "admin_cmd_passthru": { 00:03:46.177 "identify_ctrlr": false 00:03:46.177 }, 00:03:46.177 "dhchap_digests": [ 00:03:46.177 "sha256", 00:03:46.177 "sha384", 00:03:46.177 "sha512" 00:03:46.177 ], 00:03:46.177 "dhchap_dhgroups": [ 00:03:46.177 "null", 00:03:46.177 "ffdhe2048", 00:03:46.177 "ffdhe3072", 00:03:46.177 "ffdhe4096", 00:03:46.177 "ffdhe6144", 00:03:46.177 "ffdhe8192" 00:03:46.177 ] 00:03:46.177 } 00:03:46.177 }, 00:03:46.177 { 00:03:46.177 "method": "nvmf_set_max_subsystems", 00:03:46.177 "params": { 00:03:46.177 "max_subsystems": 1024 00:03:46.177 } 00:03:46.177 }, 00:03:46.177 { 00:03:46.177 "method": "nvmf_set_crdt", 00:03:46.177 "params": { 00:03:46.177 "crdt1": 0, 00:03:46.177 "crdt2": 0, 00:03:46.177 "crdt3": 0 00:03:46.177 } 00:03:46.177 }, 00:03:46.177 { 00:03:46.177 "method": "nvmf_create_transport", 00:03:46.177 "params": { 00:03:46.177 "trtype": "TCP", 00:03:46.177 "max_queue_depth": 128, 00:03:46.177 "max_io_qpairs_per_ctrlr": 127, 00:03:46.177 "in_capsule_data_size": 4096, 00:03:46.177 "max_io_size": 131072, 00:03:46.177 
"io_unit_size": 131072, 00:03:46.177 "max_aq_depth": 128, 00:03:46.177 "num_shared_buffers": 511, 00:03:46.177 "buf_cache_size": 4294967295, 00:03:46.177 "dif_insert_or_strip": false, 00:03:46.177 "zcopy": false, 00:03:46.177 "c2h_success": true, 00:03:46.177 "sock_priority": 0, 00:03:46.177 "abort_timeout_sec": 1, 00:03:46.177 "ack_timeout": 0, 00:03:46.177 "data_wr_pool_size": 0 00:03:46.177 } 00:03:46.177 } 00:03:46.177 ] 00:03:46.177 }, 00:03:46.177 { 00:03:46.177 "subsystem": "iscsi", 00:03:46.177 "config": [ 00:03:46.177 { 00:03:46.177 "method": "iscsi_set_options", 00:03:46.177 "params": { 00:03:46.177 "node_base": "iqn.2016-06.io.spdk", 00:03:46.177 "max_sessions": 128, 00:03:46.177 "max_connections_per_session": 2, 00:03:46.177 "max_queue_depth": 64, 00:03:46.177 "default_time2wait": 2, 00:03:46.177 "default_time2retain": 20, 00:03:46.177 "first_burst_length": 8192, 00:03:46.177 "immediate_data": true, 00:03:46.177 "allow_duplicated_isid": false, 00:03:46.177 "error_recovery_level": 0, 00:03:46.177 "nop_timeout": 60, 00:03:46.177 "nop_in_interval": 30, 00:03:46.177 "disable_chap": false, 00:03:46.177 "require_chap": false, 00:03:46.177 "mutual_chap": false, 00:03:46.177 "chap_group": 0, 00:03:46.177 "max_large_datain_per_connection": 64, 00:03:46.177 "max_r2t_per_connection": 4, 00:03:46.177 "pdu_pool_size": 36864, 00:03:46.177 "immediate_data_pool_size": 16384, 00:03:46.177 "data_out_pool_size": 2048 00:03:46.177 } 00:03:46.177 } 00:03:46.177 ] 00:03:46.177 } 00:03:46.177 ] 00:03:46.177 } 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 452057 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 452057 ']' 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 452057 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 452057 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 452057' 00:03:46.177 killing process with pid 452057 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 452057 00:03:46.177 08:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 452057 00:03:46.438 08:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=452397 00:03:46.438 08:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:46.438 08:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 452397 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 452397 ']' 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 452397 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 452397 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 452397' 00:03:51.728 killing process with pid 452397 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 452397 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 452397 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:51.728 00:03:51.728 real 0m6.551s 00:03:51.728 user 0m6.462s 00:03:51.728 sys 0m0.556s 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.728 08:49:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:51.728 ************************************ 00:03:51.728 END TEST skip_rpc_with_json 00:03:51.728 ************************************ 00:03:51.728 08:49:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:51.728 08:49:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.728 08:49:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.728 08:49:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.728 ************************************ 00:03:51.728 START TEST skip_rpc_with_delay 00:03:51.728 ************************************ 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:51.728 [2024-11-20 08:49:17.134968] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:51.728 00:03:51.728 real 0m0.076s 00:03:51.728 user 0m0.050s 00:03:51.728 sys 0m0.025s 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.728 08:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:51.728 ************************************ 00:03:51.728 END TEST skip_rpc_with_delay 00:03:51.728 ************************************ 00:03:51.728 08:49:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:51.728 08:49:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:51.728 08:49:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:51.728 08:49:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.728 08:49:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.728 08:49:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.728 ************************************ 00:03:51.728 START TEST exit_on_failed_rpc_init 00:03:51.728 ************************************ 00:03:51.728 08:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:51.728 08:49:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=453458 00:03:51.728 08:49:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 453458 00:03:51.728 08:49:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:51.728 08:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 453458 ']' 00:03:51.728 08:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.728 08:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:51.728 08:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.728 08:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:51.728 08:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:51.990 [2024-11-20 08:49:17.291645] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:03:51.990 [2024-11-20 08:49:17.291697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453458 ] 00:03:51.990 [2024-11-20 08:49:17.375797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.990 [2024-11-20 08:49:17.407439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.562 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:52.562 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:52.562 08:49:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.562 08:49:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:52.562 
08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:52.562 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:52.562 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:52.824 [2024-11-20 08:49:18.150801] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:03:52.824 [2024-11-20 08:49:18.150853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453690 ] 00:03:52.824 [2024-11-20 08:49:18.239445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.824 [2024-11-20 08:49:18.275452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:52.824 [2024-11-20 08:49:18.275505] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:52.824 [2024-11-20 08:49:18.275515] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:52.824 [2024-11-20 08:49:18.275522] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 453458 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 453458 ']' 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 453458 00:03:52.824 08:49:18 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:52.824 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453458 00:03:53.085 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:53.085 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:53.085 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453458' 00:03:53.085 killing process with pid 453458 00:03:53.085 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 453458 00:03:53.085 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 453458 00:03:53.085 00:03:53.085 real 0m1.331s 00:03:53.085 user 0m1.556s 00:03:53.085 sys 0m0.398s 00:03:53.085 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.085 08:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:53.085 ************************************ 00:03:53.085 END TEST exit_on_failed_rpc_init 00:03:53.085 ************************************ 00:03:53.085 08:49:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:53.085 00:03:53.085 real 0m13.744s 00:03:53.086 user 0m13.310s 00:03:53.086 sys 0m1.596s 00:03:53.086 08:49:18 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.086 08:49:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.086 ************************************ 00:03:53.086 END TEST skip_rpc 00:03:53.086 ************************************ 00:03:53.346 08:49:18 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:53.347 08:49:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.347 08:49:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.347 08:49:18 -- common/autotest_common.sh@10 -- # set +x 00:03:53.347 ************************************ 00:03:53.347 START TEST rpc_client 00:03:53.347 ************************************ 00:03:53.347 08:49:18 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:53.347 * Looking for test storage... 00:03:53.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:53.347 08:49:18 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:53.347 08:49:18 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:53.347 08:49:18 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:53.347 08:49:18 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:53.347 08:49:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.607 08:49:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:53.607 08:49:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:53.607 08:49:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.608 08:49:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:53.608 08:49:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.608 08:49:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.608 08:49:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.608 08:49:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:53.608 08:49:18 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.608 08:49:18 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.608 --rc genhtml_branch_coverage=1 00:03:53.608 --rc genhtml_function_coverage=1 00:03:53.608 --rc genhtml_legend=1 00:03:53.608 --rc geninfo_all_blocks=1 00:03:53.608 --rc geninfo_unexecuted_blocks=1 00:03:53.608 00:03:53.608 ' 00:03:53.608 08:49:18 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.608 --rc genhtml_branch_coverage=1 
00:03:53.608 --rc genhtml_function_coverage=1 00:03:53.608 --rc genhtml_legend=1 00:03:53.608 --rc geninfo_all_blocks=1 00:03:53.608 --rc geninfo_unexecuted_blocks=1 00:03:53.608 00:03:53.608 ' 00:03:53.608 08:49:18 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.608 --rc genhtml_branch_coverage=1 00:03:53.608 --rc genhtml_function_coverage=1 00:03:53.608 --rc genhtml_legend=1 00:03:53.608 --rc geninfo_all_blocks=1 00:03:53.608 --rc geninfo_unexecuted_blocks=1 00:03:53.608 00:03:53.608 ' 00:03:53.608 08:49:18 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.608 --rc genhtml_branch_coverage=1 00:03:53.608 --rc genhtml_function_coverage=1 00:03:53.608 --rc genhtml_legend=1 00:03:53.608 --rc geninfo_all_blocks=1 00:03:53.608 --rc geninfo_unexecuted_blocks=1 00:03:53.608 00:03:53.608 ' 00:03:53.608 08:49:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:53.608 OK 00:03:53.608 08:49:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:53.608 00:03:53.608 real 0m0.222s 00:03:53.608 user 0m0.133s 00:03:53.608 sys 0m0.103s 00:03:53.608 08:49:18 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.608 08:49:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:53.608 ************************************ 00:03:53.608 END TEST rpc_client 00:03:53.608 ************************************ 00:03:53.608 08:49:18 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:53.608 08:49:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.608 08:49:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.608 08:49:18 -- common/autotest_common.sh@10 
-- # set +x 00:03:53.608 ************************************ 00:03:53.608 START TEST json_config 00:03:53.608 ************************************ 00:03:53.608 08:49:18 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:53.608 08:49:19 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:53.608 08:49:19 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:53.608 08:49:19 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:53.869 08:49:19 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:53.869 08:49:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.869 08:49:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.869 08:49:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.869 08:49:19 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.869 08:49:19 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.869 08:49:19 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.869 08:49:19 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.869 08:49:19 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.869 08:49:19 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.869 08:49:19 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.869 08:49:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.869 08:49:19 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:53.869 08:49:19 json_config -- scripts/common.sh@345 -- # : 1 00:03:53.869 08:49:19 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.869 08:49:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.869 08:49:19 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:53.869 08:49:19 json_config -- scripts/common.sh@353 -- # local d=1 00:03:53.869 08:49:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.869 08:49:19 json_config -- scripts/common.sh@355 -- # echo 1 00:03:53.869 08:49:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.869 08:49:19 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:53.869 08:49:19 json_config -- scripts/common.sh@353 -- # local d=2 00:03:53.869 08:49:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.869 08:49:19 json_config -- scripts/common.sh@355 -- # echo 2 00:03:53.869 08:49:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.869 08:49:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.869 08:49:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.869 08:49:19 json_config -- scripts/common.sh@368 -- # return 0 00:03:53.869 08:49:19 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.869 08:49:19 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:53.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.869 --rc genhtml_branch_coverage=1 00:03:53.869 --rc genhtml_function_coverage=1 00:03:53.869 --rc genhtml_legend=1 00:03:53.869 --rc geninfo_all_blocks=1 00:03:53.869 --rc geninfo_unexecuted_blocks=1 00:03:53.869 00:03:53.869 ' 00:03:53.869 08:49:19 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:53.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.869 --rc genhtml_branch_coverage=1 00:03:53.869 --rc genhtml_function_coverage=1 00:03:53.869 --rc genhtml_legend=1 00:03:53.869 --rc geninfo_all_blocks=1 00:03:53.869 --rc geninfo_unexecuted_blocks=1 00:03:53.869 00:03:53.869 ' 00:03:53.869 08:49:19 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:53.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.869 --rc genhtml_branch_coverage=1 00:03:53.869 --rc genhtml_function_coverage=1 00:03:53.869 --rc genhtml_legend=1 00:03:53.869 --rc geninfo_all_blocks=1 00:03:53.869 --rc geninfo_unexecuted_blocks=1 00:03:53.869 00:03:53.869 ' 00:03:53.869 08:49:19 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:53.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.869 --rc genhtml_branch_coverage=1 00:03:53.869 --rc genhtml_function_coverage=1 00:03:53.869 --rc genhtml_legend=1 00:03:53.869 --rc geninfo_all_blocks=1 00:03:53.869 --rc geninfo_unexecuted_blocks=1 00:03:53.869 00:03:53.869 ' 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:53.869 08:49:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:53.869 08:49:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:53.869 08:49:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:53.869 08:49:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:53.869 08:49:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.869 08:49:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.869 08:49:19 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.869 08:49:19 json_config -- paths/export.sh@5 -- # export PATH 00:03:53.869 08:49:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@51 -- # : 0 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:53.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:53.869 08:49:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:53.869 08:49:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:53.870 08:49:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:53.870 08:49:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:53.870 08:49:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:53.870 08:49:19 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:53.870 08:49:19 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:53.870 INFO: JSON configuration test init 00:03:53.870 08:49:19 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:53.870 08:49:19 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:53.870 08:49:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.870 08:49:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.870 08:49:19 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:53.870 08:49:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.870 08:49:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.870 08:49:19 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:53.870 08:49:19 json_config -- json_config/common.sh@9 -- # local app=target 00:03:53.870 08:49:19 json_config -- json_config/common.sh@10 -- # shift 00:03:53.870 08:49:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:53.870 08:49:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:53.870 08:49:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:53.870 08:49:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.870 08:49:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.870 08:49:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=453934 00:03:53.870 08:49:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:53.870 Waiting for target to run... 
00:03:53.870 08:49:19 json_config -- json_config/common.sh@25 -- # waitforlisten 453934 /var/tmp/spdk_tgt.sock 00:03:53.870 08:49:19 json_config -- common/autotest_common.sh@835 -- # '[' -z 453934 ']' 00:03:53.870 08:49:19 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:53.870 08:49:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:53.870 08:49:19 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:53.870 08:49:19 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:53.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:53.870 08:49:19 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:53.870 08:49:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.870 [2024-11-20 08:49:19.271245] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:03:53.870 [2024-11-20 08:49:19.271315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453934 ] 00:03:54.130 [2024-11-20 08:49:19.576052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.130 [2024-11-20 08:49:19.600209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.700 08:49:20 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:54.700 08:49:20 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:54.700 08:49:20 json_config -- json_config/common.sh@26 -- # echo '' 00:03:54.700 00:03:54.700 08:49:20 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:54.700 08:49:20 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:54.700 08:49:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.700 08:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.700 08:49:20 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:54.700 08:49:20 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:54.700 08:49:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:54.700 08:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.700 08:49:20 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:54.700 08:49:20 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:54.700 08:49:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:55.271 08:49:20 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:55.271 08:49:20 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:55.271 08:49:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:55.271 08:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.271 08:49:20 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:55.271 08:49:20 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:55.271 08:49:20 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:55.271 08:49:20 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:55.271 08:49:20 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:55.271 08:49:20 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:55.271 08:49:20 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:55.271 08:49:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@54 -- # sort 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:55.532 08:49:20 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:55.532 08:49:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.532 08:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:55.532 08:49:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:55.532 08:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:55.532 08:49:20 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:55.532 08:49:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:55.532 MallocForNvmf0 00:03:55.793 08:49:21 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:55.793 08:49:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:55.793 MallocForNvmf1 00:03:55.793 08:49:21 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:55.793 08:49:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:56.052 [2024-11-20 08:49:21.369593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:56.052 08:49:21 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:56.052 08:49:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:56.052 08:49:21 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:56.052 08:49:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:56.313 08:49:21 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:56.313 08:49:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:56.572 08:49:21 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:56.572 08:49:21 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:56.572 [2024-11-20 08:49:22.007498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:56.572 08:49:22 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:56.572 08:49:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.572 08:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.572 08:49:22 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:56.572 08:49:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.572 08:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.572 08:49:22 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:56.572 08:49:22 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:56.572 08:49:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:56.833 MallocBdevForConfigChangeCheck 00:03:56.833 08:49:22 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:56.833 08:49:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.833 08:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.833 08:49:22 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:56.833 08:49:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:57.403 08:49:22 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:03:57.403 INFO: shutting down applications... 00:03:57.403 08:49:22 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:57.403 08:49:22 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:57.403 08:49:22 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:57.403 08:49:22 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:57.663 Calling clear_iscsi_subsystem 00:03:57.663 Calling clear_nvmf_subsystem 00:03:57.663 Calling clear_nbd_subsystem 00:03:57.663 Calling clear_ublk_subsystem 00:03:57.663 Calling clear_vhost_blk_subsystem 00:03:57.663 Calling clear_vhost_scsi_subsystem 00:03:57.663 Calling clear_bdev_subsystem 00:03:57.663 08:49:23 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:57.663 08:49:23 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:57.663 08:49:23 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:57.663 08:49:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:57.663 08:49:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:57.663 08:49:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:57.923 08:49:23 json_config -- json_config/json_config.sh@352 -- # break 00:03:57.923 08:49:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:57.923 08:49:23 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:03:57.923 08:49:23 json_config -- json_config/common.sh@31 -- # local app=target 00:03:57.923 08:49:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:57.923 08:49:23 json_config -- json_config/common.sh@35 -- # [[ -n 453934 ]] 00:03:57.923 08:49:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 453934 00:03:57.924 08:49:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:57.924 08:49:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:57.924 08:49:23 json_config -- json_config/common.sh@41 -- # kill -0 453934 00:03:57.924 08:49:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:58.494 08:49:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:58.494 08:49:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.494 08:49:23 json_config -- json_config/common.sh@41 -- # kill -0 453934 00:03:58.494 08:49:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:58.494 08:49:23 json_config -- json_config/common.sh@43 -- # break 00:03:58.494 08:49:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:58.494 08:49:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:58.494 SPDK target shutdown done 00:03:58.494 08:49:23 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:58.494 INFO: relaunching applications... 
00:03:58.494 08:49:23 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.494 08:49:23 json_config -- json_config/common.sh@9 -- # local app=target 00:03:58.494 08:49:23 json_config -- json_config/common.sh@10 -- # shift 00:03:58.494 08:49:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:58.494 08:49:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:58.495 08:49:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:58.495 08:49:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.495 08:49:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.495 08:49:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=455071 00:03:58.495 08:49:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:58.495 Waiting for target to run... 00:03:58.495 08:49:23 json_config -- json_config/common.sh@25 -- # waitforlisten 455071 /var/tmp/spdk_tgt.sock 00:03:58.495 08:49:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.495 08:49:23 json_config -- common/autotest_common.sh@835 -- # '[' -z 455071 ']' 00:03:58.495 08:49:23 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.495 08:49:23 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:58.495 08:49:23 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:58.495 08:49:23 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:58.495 08:49:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.495 [2024-11-20 08:49:23.999044] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:03:58.495 [2024-11-20 08:49:23.999101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455071 ] 00:03:59.065 [2024-11-20 08:49:24.298127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.065 [2024-11-20 08:49:24.322964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.325 [2024-11-20 08:49:24.819956] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:59.325 [2024-11-20 08:49:24.852330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:59.586 08:49:24 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:59.586 08:49:24 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:59.586 08:49:24 json_config -- json_config/common.sh@26 -- # echo '' 00:03:59.586 00:03:59.586 08:49:24 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:59.586 08:49:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:59.586 INFO: Checking if target configuration is the same... 
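The "Waiting for target to run..." / `waitforlisten 455071 /var/tmp/spdk_tgt.sock` step above polls until the freshly launched pid is alive and its RPC UNIX domain socket appears, bounded by `max_retries=100`. A hypothetical minimal sketch (the real helper additionally talks RPC over the socket; checking `-S` only confirms the socket file exists):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the waitforlisten pattern from the trace:
# fail fast if the process dies during startup, succeed once the RPC
# socket appears, give up after max_retries polls.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=${3:-100}
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [[ -S $rpc_addr ]] && return 0           # socket file present
        sleep 0.1
    done
    return 1                                     # gave up waiting
}
```

Checking the pid first matters: if the target crashes on startup, the loop returns failure immediately instead of burning the full retry budget.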
00:03:59.586 08:49:24 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.586 08:49:24 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:59.586 08:49:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:59.586 + '[' 2 -ne 2 ']' 00:03:59.586 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:59.586 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:59.586 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:59.586 +++ basename /dev/fd/62 00:03:59.586 ++ mktemp /tmp/62.XXX 00:03:59.586 + tmp_file_1=/tmp/62.FjN 00:03:59.586 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.586 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:59.586 + tmp_file_2=/tmp/spdk_tgt_config.json.Y7S 00:03:59.586 + ret=0 00:03:59.586 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:59.846 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:59.846 + diff -u /tmp/62.FjN /tmp/spdk_tgt_config.json.Y7S 00:03:59.846 + echo 'INFO: JSON config files are the same' 00:03:59.846 INFO: JSON config files are the same 00:03:59.846 + rm /tmp/62.FjN /tmp/spdk_tgt_config.json.Y7S 00:03:59.846 + exit 0 00:03:59.846 08:49:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:59.846 08:49:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:59.846 INFO: changing configuration and checking if this can be detected... 
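The comparison above dumps the live config with `save_config`, canonicalizes both JSON documents through `config_filter.py -method sort` into `mktemp` files, and `diff -u`s them. A hypothetical stand-in using python's `json` module in place of SPDK's filter script:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the comparison step traced above: each
# config is canonicalized (sorted keys, fixed indent) so that key
# order never produces a spurious diff, then the two temp files are
# compared exactly as the trace does.
canon() {
    python3 -c 'import json, sys; json.dump(json.load(sys.stdin), sys.stdout, sort_keys=True, indent=2)'
}

tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/tgt_config.json.XXX)

# Same content, different key order: equal after canonicalization.
echo '{"subsystems": [], "method": "save_config"}' | canon > "$tmp_file_1"
echo '{"method": "save_config", "subsystems": []}' | canon > "$tmp_file_2"

if diff -u "$tmp_file_1" "$tmp_file_2" > /dev/null; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$tmp_file_1" "$tmp_file_2"
```

This is why the later change-detection pass (delete `MallocBdevForConfigChangeCheck`, re-dump, re-diff) reliably flips the result to `ret=1`: only a real content change survives canonicalization.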
00:03:59.846 08:49:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:59.846 08:49:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:00.107 08:49:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:00.107 08:49:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:00.107 08:49:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.107 + '[' 2 -ne 2 ']' 00:04:00.107 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:00.107 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:00.107 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.107 +++ basename /dev/fd/62 00:04:00.107 ++ mktemp /tmp/62.XXX 00:04:00.107 + tmp_file_1=/tmp/62.Kty 00:04:00.107 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.107 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:00.107 + tmp_file_2=/tmp/spdk_tgt_config.json.I9R 00:04:00.107 + ret=0 00:04:00.107 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:00.369 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:00.369 + diff -u /tmp/62.Kty /tmp/spdk_tgt_config.json.I9R 00:04:00.369 + ret=1 00:04:00.369 + echo '=== Start of file: /tmp/62.Kty ===' 00:04:00.369 + cat /tmp/62.Kty 00:04:00.369 + echo '=== End of file: /tmp/62.Kty ===' 00:04:00.369 + echo '' 00:04:00.369 + echo '=== Start of file: /tmp/spdk_tgt_config.json.I9R ===' 00:04:00.369 + cat /tmp/spdk_tgt_config.json.I9R 00:04:00.369 + echo '=== End of file: /tmp/spdk_tgt_config.json.I9R ===' 00:04:00.369 + echo '' 00:04:00.369 + rm /tmp/62.Kty /tmp/spdk_tgt_config.json.I9R 00:04:00.369 + exit 1 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:00.369 INFO: configuration change detected. 
00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:00.369 08:49:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.369 08:49:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 455071 ]] 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:00.369 08:49:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.369 08:49:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:00.369 08:49:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:00.369 08:49:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.369 08:49:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.630 08:49:25 json_config -- json_config/json_config.sh@330 -- # killprocess 455071 00:04:00.630 08:49:25 json_config -- common/autotest_common.sh@954 -- # '[' -z 455071 ']' 00:04:00.630 08:49:25 json_config -- common/autotest_common.sh@958 -- # kill -0 455071 
00:04:00.630 08:49:25 json_config -- common/autotest_common.sh@959 -- # uname 00:04:00.630 08:49:25 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.630 08:49:25 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 455071 00:04:00.630 08:49:25 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.630 08:49:25 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.630 08:49:25 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 455071' 00:04:00.630 killing process with pid 455071 00:04:00.630 08:49:25 json_config -- common/autotest_common.sh@973 -- # kill 455071 00:04:00.630 08:49:25 json_config -- common/autotest_common.sh@978 -- # wait 455071 00:04:00.891 08:49:26 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.891 08:49:26 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:00.891 08:49:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.891 08:49:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.891 08:49:26 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:00.891 08:49:26 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:00.891 INFO: Success 00:04:00.891 00:04:00.891 real 0m7.304s 00:04:00.891 user 0m8.744s 00:04:00.891 sys 0m1.981s 00:04:00.891 08:49:26 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.891 08:49:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.891 ************************************ 00:04:00.891 END TEST json_config 00:04:00.891 ************************************ 00:04:00.891 08:49:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:00.891 08:49:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.891 08:49:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.891 08:49:26 -- common/autotest_common.sh@10 -- # set +x 00:04:00.891 ************************************ 00:04:00.891 START TEST json_config_extra_key 00:04:00.891 ************************************ 00:04:00.891 08:49:26 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:01.152 08:49:26 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:01.152 08:49:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:01.152 08:49:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:01.152 08:49:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.152 08:49:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:01.152 08:49:26 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.152 08:49:26 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:01.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.152 --rc genhtml_branch_coverage=1 00:04:01.152 --rc genhtml_function_coverage=1 00:04:01.152 --rc genhtml_legend=1 00:04:01.152 --rc geninfo_all_blocks=1 
00:04:01.152 --rc geninfo_unexecuted_blocks=1 00:04:01.152 00:04:01.152 ' 00:04:01.152 08:49:26 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:01.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.152 --rc genhtml_branch_coverage=1 00:04:01.152 --rc genhtml_function_coverage=1 00:04:01.152 --rc genhtml_legend=1 00:04:01.152 --rc geninfo_all_blocks=1 00:04:01.152 --rc geninfo_unexecuted_blocks=1 00:04:01.152 00:04:01.152 ' 00:04:01.152 08:49:26 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:01.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.152 --rc genhtml_branch_coverage=1 00:04:01.152 --rc genhtml_function_coverage=1 00:04:01.152 --rc genhtml_legend=1 00:04:01.152 --rc geninfo_all_blocks=1 00:04:01.152 --rc geninfo_unexecuted_blocks=1 00:04:01.152 00:04:01.152 ' 00:04:01.152 08:49:26 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:01.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.152 --rc genhtml_branch_coverage=1 00:04:01.152 --rc genhtml_function_coverage=1 00:04:01.152 --rc genhtml_legend=1 00:04:01.152 --rc geninfo_all_blocks=1 00:04:01.152 --rc geninfo_unexecuted_blocks=1 00:04:01.152 00:04:01.152 ' 00:04:01.152 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:01.152 08:49:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:01.152 08:49:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:01.152 08:49:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:01.152 08:49:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:01.152 08:49:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
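The `lt 1.15 2` / `cmp_versions` walk in the trace above splits both version strings into arrays and compares them numerically, field by field, padding the shorter one with zeros. A hypothetical re-implementation (splitting on `.` only; the trace's `IFS=.-:` also splits on `-` and `:`):

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the cmp_versions logic traced
# above: numeric field-by-field comparison, so 1.15 > 1.9 (numeric)
# rather than 1.15 < 1.9 (lexicographic).
lt() {                       # lt A B  ->  exit 0 when version A < B
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < n; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                 # equal versions are not less-than
}

lt 1.15 2 && echo 'lcov is older than 2.x: keep the legacy --rc options'
```

This is the check that decides whether the suite exports the old-style `--rc lcov_branch_coverage=1` options seen throughout the trace, which lcov 2.x renamed.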
00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:01.153 08:49:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:01.153 08:49:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:01.153 08:49:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:01.153 08:49:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:01.153 08:49:26 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.153 08:49:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.153 08:49:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.153 08:49:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:01.153 08:49:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:01.153 08:49:26 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:01.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:01.153 08:49:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:01.153 INFO: launching applications... 00:04:01.153 08:49:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:01.153 08:49:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:01.153 08:49:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:01.153 08:49:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:01.153 08:49:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:01.153 08:49:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:01.153 08:49:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.153 08:49:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.153 08:49:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=455761 00:04:01.153 08:49:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:01.153 Waiting for target to run... 
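The trace above also records a genuine script warning: `nvmf/common.sh: line 33: [: : integer expression expected`, raised by `'[' '' -eq 1 ']'` when an unset flag expands to the empty string, which `-eq` cannot parse as an integer. A hypothetical demo of the failing pattern and the usual guard (`DEMO_FLAG` is an illustrative name, not the actual variable from `nvmf/common.sh`):

```shell
#!/usr/bin/env bash
# The failing form from the log: an unset variable expands to '',
# and '' is not a valid integer operand for -eq (test exits 2).
unset DEMO_FLAG

if [ "$DEMO_FLAG" -eq 1 ] 2>/dev/null; then
    echo 'flag enabled'
fi

# Guarded form: ${VAR:-0} collapses unset/empty to 0, so the numeric
# comparison is always well-formed.
if [ "${DEMO_FLAG:-0}" -eq 1 ]; then
    echo 'flag enabled'
else
    echo 'flag disabled'
fi
```

The warning is harmless here only because the test treats the non-zero exit of `[` the same as "flag not set"; the `:-0` expansion removes the noise without changing behavior.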
00:04:01.153 08:49:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 455761 /var/tmp/spdk_tgt.sock 00:04:01.153 08:49:26 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 455761 ']' 00:04:01.153 08:49:26 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:01.153 08:49:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:01.153 08:49:26 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.153 08:49:26 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:01.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:01.153 08:49:26 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.153 08:49:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:01.153 [2024-11-20 08:49:26.636192] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:04:01.153 [2024-11-20 08:49:26.636265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455761 ] 00:04:01.724 [2024-11-20 08:49:26.985359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.724 [2024-11-20 08:49:27.009344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.984 08:49:27 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.984 08:49:27 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:01.984 08:49:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:01.984 00:04:01.984 08:49:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:01.984 INFO: shutting down applications... 00:04:01.984 08:49:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:01.984 08:49:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:01.984 08:49:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:01.984 08:49:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 455761 ]] 00:04:01.984 08:49:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 455761 00:04:01.984 08:49:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:01.984 08:49:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.984 08:49:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 455761 00:04:01.984 08:49:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:02.552 08:49:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:02.552 08:49:27 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:02.552 08:49:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 455761 00:04:02.552 08:49:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:02.552 08:49:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:02.552 08:49:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:02.552 08:49:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:02.552 SPDK target shutdown done 00:04:02.552 08:49:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:02.552 Success 00:04:02.552 00:04:02.552 real 0m1.581s 00:04:02.552 user 0m1.151s 00:04:02.552 sys 0m0.468s 00:04:02.552 08:49:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.552 08:49:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:02.552 ************************************ 00:04:02.552 END TEST json_config_extra_key 00:04:02.552 ************************************ 00:04:02.552 08:49:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.552 08:49:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.552 08:49:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.552 08:49:27 -- common/autotest_common.sh@10 -- # set +x 00:04:02.552 ************************************ 00:04:02.552 START TEST alias_rpc 00:04:02.552 ************************************ 00:04:02.552 08:49:28 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.812 * Looking for test storage... 
00:04:02.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:02.812 08:49:28 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.812 08:49:28 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.812 08:49:28 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.812 08:49:28 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.812 08:49:28 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.813 08:49:28 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:02.813 08:49:28 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.813 08:49:28 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.813 --rc genhtml_branch_coverage=1 00:04:02.813 --rc genhtml_function_coverage=1 00:04:02.813 --rc genhtml_legend=1 00:04:02.813 --rc geninfo_all_blocks=1 00:04:02.813 --rc geninfo_unexecuted_blocks=1 00:04:02.813 00:04:02.813 ' 00:04:02.813 08:49:28 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.813 --rc genhtml_branch_coverage=1 00:04:02.813 --rc genhtml_function_coverage=1 00:04:02.813 --rc genhtml_legend=1 00:04:02.813 --rc geninfo_all_blocks=1 00:04:02.813 --rc geninfo_unexecuted_blocks=1 00:04:02.813 00:04:02.813 ' 00:04:02.813 08:49:28 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:02.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.813 --rc genhtml_branch_coverage=1 00:04:02.813 --rc genhtml_function_coverage=1 00:04:02.813 --rc genhtml_legend=1 00:04:02.813 --rc geninfo_all_blocks=1 00:04:02.813 --rc geninfo_unexecuted_blocks=1 00:04:02.813 00:04:02.813 ' 00:04:02.813 08:49:28 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.813 --rc genhtml_branch_coverage=1 00:04:02.813 --rc genhtml_function_coverage=1 00:04:02.813 --rc genhtml_legend=1 00:04:02.813 --rc geninfo_all_blocks=1 00:04:02.813 --rc geninfo_unexecuted_blocks=1 00:04:02.813 00:04:02.813 ' 00:04:02.813 08:49:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:02.813 08:49:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=456131 00:04:02.813 08:49:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 456131 00:04:02.813 08:49:28 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 456131 ']' 00:04:02.813 08:49:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.813 08:49:28 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.813 08:49:28 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.813 08:49:28 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.813 08:49:28 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.813 08:49:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.813 [2024-11-20 08:49:28.294670] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
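The xtrace above walks through the version comparison from `scripts/common.sh`: each version string is split on `.`, `-`, and `:` into an array, and components are compared pairwise until one side wins. The sketch below re-creates that `lt`/`cmp_versions` logic with the same variable names the trace shows (`ver1`, `ver2`, `ver1_l`, `ver2_l`, `v`); it is a simplified reconstruction from the trace, not the SPDK source verbatim.

```shell
# Re-creation of the "lt 1.15 2" check traced above. Missing components
# (e.g. comparing 1.15 against 2) are treated as 0, matching the loop
# bound (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) in the trace.
lt() {
    local ver1 ver2 ver1_l ver2_l v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1   # ver1 component newer: "less than" fails
        (( a < b )) && return 0   # ver1 component older: "less than" holds
    done
    return 1                      # equal versions are not "less than"
}

if lt 1.15 2; then echo "1.15 < 2"; fi
```

This is why the trace ends with `return 0` for `lt 1.15 2`: the first components already decide the comparison (1 < 2), so lcov 1.15 is classified as older than 2.x and the extra `--rc` branch-coverage options are selected.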
00:04:02.813 [2024-11-20 08:49:28.294747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456131 ] 00:04:03.073 [2024-11-20 08:49:28.382174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.073 [2024-11-20 08:49:28.417457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.642 08:49:29 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.642 08:49:29 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:03.642 08:49:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:03.901 08:49:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 456131 00:04:03.902 08:49:29 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 456131 ']' 00:04:03.902 08:49:29 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 456131 00:04:03.902 08:49:29 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:03.902 08:49:29 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.902 08:49:29 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 456131 00:04:03.902 08:49:29 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.902 08:49:29 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.902 08:49:29 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 456131' 00:04:03.902 killing process with pid 456131 00:04:03.902 08:49:29 alias_rpc -- common/autotest_common.sh@973 -- # kill 456131 00:04:03.902 08:49:29 alias_rpc -- common/autotest_common.sh@978 -- # wait 456131 00:04:04.162 00:04:04.162 real 0m1.507s 00:04:04.162 user 0m1.662s 00:04:04.162 sys 0m0.421s 00:04:04.162 08:49:29 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.162 08:49:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.162 ************************************ 00:04:04.162 END TEST alias_rpc 00:04:04.162 ************************************ 00:04:04.162 08:49:29 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:04.162 08:49:29 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:04.162 08:49:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.162 08:49:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.162 08:49:29 -- common/autotest_common.sh@10 -- # set +x 00:04:04.162 ************************************ 00:04:04.162 START TEST spdkcli_tcp 00:04:04.162 ************************************ 00:04:04.162 08:49:29 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:04.424 * Looking for test storage... 
00:04:04.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.424 08:49:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.424 --rc genhtml_branch_coverage=1 00:04:04.424 --rc genhtml_function_coverage=1 00:04:04.424 --rc genhtml_legend=1 00:04:04.424 --rc geninfo_all_blocks=1 00:04:04.424 --rc geninfo_unexecuted_blocks=1 00:04:04.424 00:04:04.424 ' 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.424 --rc genhtml_branch_coverage=1 00:04:04.424 --rc genhtml_function_coverage=1 00:04:04.424 --rc genhtml_legend=1 00:04:04.424 --rc geninfo_all_blocks=1 00:04:04.424 --rc geninfo_unexecuted_blocks=1 00:04:04.424 00:04:04.424 ' 00:04:04.424 08:49:29 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.424 --rc genhtml_branch_coverage=1 00:04:04.424 --rc genhtml_function_coverage=1 00:04:04.424 --rc genhtml_legend=1 00:04:04.424 --rc geninfo_all_blocks=1 00:04:04.424 --rc geninfo_unexecuted_blocks=1 00:04:04.424 00:04:04.424 ' 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.424 --rc genhtml_branch_coverage=1 00:04:04.424 --rc genhtml_function_coverage=1 00:04:04.424 --rc genhtml_legend=1 00:04:04.424 --rc geninfo_all_blocks=1 00:04:04.424 --rc geninfo_unexecuted_blocks=1 00:04:04.424 00:04:04.424 ' 00:04:04.424 08:49:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:04.424 08:49:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:04.424 08:49:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:04.424 08:49:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:04.424 08:49:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:04.424 08:49:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:04.424 08:49:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.424 08:49:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=456476 00:04:04.424 08:49:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 456476 00:04:04.424 08:49:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 456476 ']' 00:04:04.424 08:49:29 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.425 08:49:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.425 08:49:29 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.425 08:49:29 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.425 08:49:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.425 [2024-11-20 08:49:29.901264] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:04:04.425 [2024-11-20 08:49:29.901342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456476 ] 00:04:04.686 [2024-11-20 08:49:29.988149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:04.686 [2024-11-20 08:49:30.025313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.686 [2024-11-20 08:49:30.025405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.258 08:49:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.258 08:49:30 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:05.258 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:05.258 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=456661 00:04:05.258 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:05.520 [ 00:04:05.520 "bdev_malloc_delete", 00:04:05.520 "bdev_malloc_create", 00:04:05.520 "bdev_null_resize", 00:04:05.520 "bdev_null_delete", 00:04:05.520 "bdev_null_create", 00:04:05.520 "bdev_nvme_cuse_unregister", 00:04:05.520 "bdev_nvme_cuse_register", 00:04:05.520 "bdev_opal_new_user", 00:04:05.520 "bdev_opal_set_lock_state", 00:04:05.520 "bdev_opal_delete", 00:04:05.520 "bdev_opal_get_info", 00:04:05.520 "bdev_opal_create", 00:04:05.520 "bdev_nvme_opal_revert", 00:04:05.520 "bdev_nvme_opal_init", 00:04:05.520 "bdev_nvme_send_cmd", 00:04:05.520 "bdev_nvme_set_keys", 00:04:05.520 "bdev_nvme_get_path_iostat", 00:04:05.520 "bdev_nvme_get_mdns_discovery_info", 00:04:05.520 "bdev_nvme_stop_mdns_discovery", 00:04:05.520 "bdev_nvme_start_mdns_discovery", 00:04:05.520 "bdev_nvme_set_multipath_policy", 00:04:05.520 "bdev_nvme_set_preferred_path", 00:04:05.520 "bdev_nvme_get_io_paths", 00:04:05.520 "bdev_nvme_remove_error_injection", 00:04:05.520 "bdev_nvme_add_error_injection", 00:04:05.520 "bdev_nvme_get_discovery_info", 00:04:05.520 "bdev_nvme_stop_discovery", 00:04:05.520 "bdev_nvme_start_discovery", 00:04:05.520 "bdev_nvme_get_controller_health_info", 00:04:05.520 "bdev_nvme_disable_controller", 00:04:05.520 "bdev_nvme_enable_controller", 00:04:05.520 "bdev_nvme_reset_controller", 00:04:05.520 "bdev_nvme_get_transport_statistics", 00:04:05.520 "bdev_nvme_apply_firmware", 00:04:05.520 "bdev_nvme_detach_controller", 00:04:05.520 "bdev_nvme_get_controllers", 00:04:05.520 "bdev_nvme_attach_controller", 00:04:05.520 "bdev_nvme_set_hotplug", 00:04:05.520 "bdev_nvme_set_options", 00:04:05.520 "bdev_passthru_delete", 00:04:05.520 "bdev_passthru_create", 00:04:05.520 "bdev_lvol_set_parent_bdev", 00:04:05.520 "bdev_lvol_set_parent", 00:04:05.520 "bdev_lvol_check_shallow_copy", 00:04:05.520 "bdev_lvol_start_shallow_copy", 00:04:05.520 
"bdev_lvol_grow_lvstore", 00:04:05.520 "bdev_lvol_get_lvols", 00:04:05.520 "bdev_lvol_get_lvstores", 00:04:05.520 "bdev_lvol_delete", 00:04:05.520 "bdev_lvol_set_read_only", 00:04:05.520 "bdev_lvol_resize", 00:04:05.520 "bdev_lvol_decouple_parent", 00:04:05.520 "bdev_lvol_inflate", 00:04:05.520 "bdev_lvol_rename", 00:04:05.520 "bdev_lvol_clone_bdev", 00:04:05.520 "bdev_lvol_clone", 00:04:05.520 "bdev_lvol_snapshot", 00:04:05.520 "bdev_lvol_create", 00:04:05.520 "bdev_lvol_delete_lvstore", 00:04:05.520 "bdev_lvol_rename_lvstore", 00:04:05.520 "bdev_lvol_create_lvstore", 00:04:05.520 "bdev_raid_set_options", 00:04:05.520 "bdev_raid_remove_base_bdev", 00:04:05.520 "bdev_raid_add_base_bdev", 00:04:05.520 "bdev_raid_delete", 00:04:05.520 "bdev_raid_create", 00:04:05.520 "bdev_raid_get_bdevs", 00:04:05.520 "bdev_error_inject_error", 00:04:05.520 "bdev_error_delete", 00:04:05.520 "bdev_error_create", 00:04:05.520 "bdev_split_delete", 00:04:05.520 "bdev_split_create", 00:04:05.520 "bdev_delay_delete", 00:04:05.520 "bdev_delay_create", 00:04:05.520 "bdev_delay_update_latency", 00:04:05.520 "bdev_zone_block_delete", 00:04:05.520 "bdev_zone_block_create", 00:04:05.520 "blobfs_create", 00:04:05.520 "blobfs_detect", 00:04:05.520 "blobfs_set_cache_size", 00:04:05.520 "bdev_aio_delete", 00:04:05.520 "bdev_aio_rescan", 00:04:05.520 "bdev_aio_create", 00:04:05.520 "bdev_ftl_set_property", 00:04:05.520 "bdev_ftl_get_properties", 00:04:05.520 "bdev_ftl_get_stats", 00:04:05.520 "bdev_ftl_unmap", 00:04:05.520 "bdev_ftl_unload", 00:04:05.520 "bdev_ftl_delete", 00:04:05.520 "bdev_ftl_load", 00:04:05.520 "bdev_ftl_create", 00:04:05.520 "bdev_virtio_attach_controller", 00:04:05.520 "bdev_virtio_scsi_get_devices", 00:04:05.520 "bdev_virtio_detach_controller", 00:04:05.520 "bdev_virtio_blk_set_hotplug", 00:04:05.520 "bdev_iscsi_delete", 00:04:05.521 "bdev_iscsi_create", 00:04:05.521 "bdev_iscsi_set_options", 00:04:05.521 "accel_error_inject_error", 00:04:05.521 "ioat_scan_accel_module", 
00:04:05.521 "dsa_scan_accel_module", 00:04:05.521 "iaa_scan_accel_module", 00:04:05.521 "vfu_virtio_create_fs_endpoint", 00:04:05.521 "vfu_virtio_create_scsi_endpoint", 00:04:05.521 "vfu_virtio_scsi_remove_target", 00:04:05.521 "vfu_virtio_scsi_add_target", 00:04:05.521 "vfu_virtio_create_blk_endpoint", 00:04:05.521 "vfu_virtio_delete_endpoint", 00:04:05.521 "keyring_file_remove_key", 00:04:05.521 "keyring_file_add_key", 00:04:05.521 "keyring_linux_set_options", 00:04:05.521 "fsdev_aio_delete", 00:04:05.521 "fsdev_aio_create", 00:04:05.521 "iscsi_get_histogram", 00:04:05.521 "iscsi_enable_histogram", 00:04:05.521 "iscsi_set_options", 00:04:05.521 "iscsi_get_auth_groups", 00:04:05.521 "iscsi_auth_group_remove_secret", 00:04:05.521 "iscsi_auth_group_add_secret", 00:04:05.521 "iscsi_delete_auth_group", 00:04:05.521 "iscsi_create_auth_group", 00:04:05.521 "iscsi_set_discovery_auth", 00:04:05.521 "iscsi_get_options", 00:04:05.521 "iscsi_target_node_request_logout", 00:04:05.521 "iscsi_target_node_set_redirect", 00:04:05.521 "iscsi_target_node_set_auth", 00:04:05.521 "iscsi_target_node_add_lun", 00:04:05.521 "iscsi_get_stats", 00:04:05.521 "iscsi_get_connections", 00:04:05.521 "iscsi_portal_group_set_auth", 00:04:05.521 "iscsi_start_portal_group", 00:04:05.521 "iscsi_delete_portal_group", 00:04:05.521 "iscsi_create_portal_group", 00:04:05.521 "iscsi_get_portal_groups", 00:04:05.521 "iscsi_delete_target_node", 00:04:05.521 "iscsi_target_node_remove_pg_ig_maps", 00:04:05.521 "iscsi_target_node_add_pg_ig_maps", 00:04:05.521 "iscsi_create_target_node", 00:04:05.521 "iscsi_get_target_nodes", 00:04:05.521 "iscsi_delete_initiator_group", 00:04:05.521 "iscsi_initiator_group_remove_initiators", 00:04:05.521 "iscsi_initiator_group_add_initiators", 00:04:05.521 "iscsi_create_initiator_group", 00:04:05.521 "iscsi_get_initiator_groups", 00:04:05.521 "nvmf_set_crdt", 00:04:05.521 "nvmf_set_config", 00:04:05.521 "nvmf_set_max_subsystems", 00:04:05.521 "nvmf_stop_mdns_prr", 
00:04:05.521 "nvmf_publish_mdns_prr", 00:04:05.521 "nvmf_subsystem_get_listeners", 00:04:05.521 "nvmf_subsystem_get_qpairs", 00:04:05.521 "nvmf_subsystem_get_controllers", 00:04:05.521 "nvmf_get_stats", 00:04:05.521 "nvmf_get_transports", 00:04:05.521 "nvmf_create_transport", 00:04:05.521 "nvmf_get_targets", 00:04:05.521 "nvmf_delete_target", 00:04:05.521 "nvmf_create_target", 00:04:05.521 "nvmf_subsystem_allow_any_host", 00:04:05.521 "nvmf_subsystem_set_keys", 00:04:05.521 "nvmf_subsystem_remove_host", 00:04:05.521 "nvmf_subsystem_add_host", 00:04:05.521 "nvmf_ns_remove_host", 00:04:05.521 "nvmf_ns_add_host", 00:04:05.521 "nvmf_subsystem_remove_ns", 00:04:05.521 "nvmf_subsystem_set_ns_ana_group", 00:04:05.521 "nvmf_subsystem_add_ns", 00:04:05.521 "nvmf_subsystem_listener_set_ana_state", 00:04:05.521 "nvmf_discovery_get_referrals", 00:04:05.521 "nvmf_discovery_remove_referral", 00:04:05.521 "nvmf_discovery_add_referral", 00:04:05.521 "nvmf_subsystem_remove_listener", 00:04:05.521 "nvmf_subsystem_add_listener", 00:04:05.521 "nvmf_delete_subsystem", 00:04:05.521 "nvmf_create_subsystem", 00:04:05.521 "nvmf_get_subsystems", 00:04:05.521 "env_dpdk_get_mem_stats", 00:04:05.521 "nbd_get_disks", 00:04:05.521 "nbd_stop_disk", 00:04:05.521 "nbd_start_disk", 00:04:05.521 "ublk_recover_disk", 00:04:05.521 "ublk_get_disks", 00:04:05.521 "ublk_stop_disk", 00:04:05.521 "ublk_start_disk", 00:04:05.521 "ublk_destroy_target", 00:04:05.521 "ublk_create_target", 00:04:05.521 "virtio_blk_create_transport", 00:04:05.521 "virtio_blk_get_transports", 00:04:05.521 "vhost_controller_set_coalescing", 00:04:05.521 "vhost_get_controllers", 00:04:05.521 "vhost_delete_controller", 00:04:05.521 "vhost_create_blk_controller", 00:04:05.521 "vhost_scsi_controller_remove_target", 00:04:05.521 "vhost_scsi_controller_add_target", 00:04:05.521 "vhost_start_scsi_controller", 00:04:05.521 "vhost_create_scsi_controller", 00:04:05.521 "thread_set_cpumask", 00:04:05.521 "scheduler_set_options", 00:04:05.521 
"framework_get_governor", 00:04:05.521 "framework_get_scheduler", 00:04:05.521 "framework_set_scheduler", 00:04:05.521 "framework_get_reactors", 00:04:05.521 "thread_get_io_channels", 00:04:05.521 "thread_get_pollers", 00:04:05.521 "thread_get_stats", 00:04:05.521 "framework_monitor_context_switch", 00:04:05.521 "spdk_kill_instance", 00:04:05.521 "log_enable_timestamps", 00:04:05.521 "log_get_flags", 00:04:05.521 "log_clear_flag", 00:04:05.521 "log_set_flag", 00:04:05.521 "log_get_level", 00:04:05.521 "log_set_level", 00:04:05.521 "log_get_print_level", 00:04:05.521 "log_set_print_level", 00:04:05.521 "framework_enable_cpumask_locks", 00:04:05.521 "framework_disable_cpumask_locks", 00:04:05.521 "framework_wait_init", 00:04:05.521 "framework_start_init", 00:04:05.521 "scsi_get_devices", 00:04:05.521 "bdev_get_histogram", 00:04:05.521 "bdev_enable_histogram", 00:04:05.521 "bdev_set_qos_limit", 00:04:05.521 "bdev_set_qd_sampling_period", 00:04:05.521 "bdev_get_bdevs", 00:04:05.521 "bdev_reset_iostat", 00:04:05.521 "bdev_get_iostat", 00:04:05.521 "bdev_examine", 00:04:05.521 "bdev_wait_for_examine", 00:04:05.521 "bdev_set_options", 00:04:05.521 "accel_get_stats", 00:04:05.521 "accel_set_options", 00:04:05.521 "accel_set_driver", 00:04:05.521 "accel_crypto_key_destroy", 00:04:05.521 "accel_crypto_keys_get", 00:04:05.521 "accel_crypto_key_create", 00:04:05.521 "accel_assign_opc", 00:04:05.521 "accel_get_module_info", 00:04:05.521 "accel_get_opc_assignments", 00:04:05.521 "vmd_rescan", 00:04:05.521 "vmd_remove_device", 00:04:05.521 "vmd_enable", 00:04:05.521 "sock_get_default_impl", 00:04:05.521 "sock_set_default_impl", 00:04:05.521 "sock_impl_set_options", 00:04:05.521 "sock_impl_get_options", 00:04:05.521 "iobuf_get_stats", 00:04:05.521 "iobuf_set_options", 00:04:05.521 "keyring_get_keys", 00:04:05.521 "vfu_tgt_set_base_path", 00:04:05.521 "framework_get_pci_devices", 00:04:05.521 "framework_get_config", 00:04:05.521 "framework_get_subsystems", 00:04:05.521 
"fsdev_set_opts", 00:04:05.521 "fsdev_get_opts", 00:04:05.521 "trace_get_info", 00:04:05.521 "trace_get_tpoint_group_mask", 00:04:05.521 "trace_disable_tpoint_group", 00:04:05.521 "trace_enable_tpoint_group", 00:04:05.521 "trace_clear_tpoint_mask", 00:04:05.521 "trace_set_tpoint_mask", 00:04:05.521 "notify_get_notifications", 00:04:05.521 "notify_get_types", 00:04:05.521 "spdk_get_version", 00:04:05.521 "rpc_get_methods" 00:04:05.521 ] 00:04:05.521 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:05.521 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:05.521 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 456476 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 456476 ']' 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 456476 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 456476 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 456476' 00:04:05.521 killing process with pid 456476 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 456476 00:04:05.521 08:49:30 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 456476 00:04:05.782 00:04:05.782 real 0m1.535s 00:04:05.782 user 0m2.782s 00:04:05.782 sys 0m0.464s 00:04:05.782 08:49:31 spdkcli_tcp -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:04:05.782 08:49:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:05.782 ************************************ 00:04:05.782 END TEST spdkcli_tcp 00:04:05.782 ************************************ 00:04:05.782 08:49:31 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:05.782 08:49:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.782 08:49:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.782 08:49:31 -- common/autotest_common.sh@10 -- # set +x 00:04:05.782 ************************************ 00:04:05.782 START TEST dpdk_mem_utility 00:04:05.782 ************************************ 00:04:05.782 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:06.046 * Looking for test storage... 00:04:06.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:06.046 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.046 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.046 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.046 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.046 08:49:31 
dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.046 08:49:31 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:06.046 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.046 08:49:31 dpdk_mem_utility 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.046 --rc genhtml_branch_coverage=1 00:04:06.046 --rc genhtml_function_coverage=1 00:04:06.046 --rc genhtml_legend=1 00:04:06.046 --rc geninfo_all_blocks=1 00:04:06.046 --rc geninfo_unexecuted_blocks=1 00:04:06.046 00:04:06.046 ' 00:04:06.046 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.046 --rc genhtml_branch_coverage=1 00:04:06.046 --rc genhtml_function_coverage=1 00:04:06.047 --rc genhtml_legend=1 00:04:06.047 --rc geninfo_all_blocks=1 00:04:06.047 --rc geninfo_unexecuted_blocks=1 00:04:06.047 00:04:06.047 ' 00:04:06.047 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.047 --rc genhtml_branch_coverage=1 00:04:06.047 --rc genhtml_function_coverage=1 00:04:06.047 --rc genhtml_legend=1 00:04:06.047 --rc geninfo_all_blocks=1 00:04:06.047 --rc geninfo_unexecuted_blocks=1 00:04:06.047 00:04:06.047 ' 00:04:06.047 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.047 --rc genhtml_branch_coverage=1 00:04:06.047 --rc genhtml_function_coverage=1 00:04:06.047 --rc genhtml_legend=1 00:04:06.047 --rc geninfo_all_blocks=1 00:04:06.047 --rc geninfo_unexecuted_blocks=1 00:04:06.047 00:04:06.047 ' 00:04:06.047 08:49:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:06.047 08:49:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=456817 00:04:06.047 08:49:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 456817 00:04:06.047 08:49:31 
dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 456817 ']' 00:04:06.047 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.047 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.047 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.047 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.047 08:49:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:06.047 08:49:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.047 [2024-11-20 08:49:31.485513] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
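The startup sequence logged here (launch `spdk_tgt`, then `waitforlisten` with `max_retries=100` against `/var/tmp/spdk.sock`, later `killprocess`) can be sketched as below. The `python3` connect probe stands in for `rpc.py`; function and variable names mirror the trace, but this is a hedged approximation, not the `autotest_common.sh` helpers themselves.

```shell
# Poll until a daemon's RPC Unix domain socket accepts connections,
# bailing out early if the process dies during startup.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # daemon died during startup
        # A plain connect() succeeds once the socket is bound and listening
        if python3 -c "import socket,sys; socket.socket(socket.AF_UNIX).connect(sys.argv[1])" \
               "$rpc_addr" 2>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1                                     # timed out
}

# Terminate the daemon and reap it so the test leaves no zombie behind.
killprocess() {
    kill "$1" 2>/dev/null
    wait "$1" 2>/dev/null || true
}
```

The same polling-with-retries shape also explains the `'[' -z 456817 ']'` and `(( i == 0 ))` checks in the trace: the helper validates that it was handed a PID before looping, and returns 0 on the first iteration where the socket answers.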
00:04:06.047 [2024-11-20 08:49:31.485589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456817 ]
00:04:06.308 [2024-11-20 08:49:31.575523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:06.308 [2024-11-20 08:49:31.617082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:06.882 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:06.882 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:04:06.882 08:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:06.882 08:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:06.882 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:06.882 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:06.882 {
00:04:06.882 "filename": "/tmp/spdk_mem_dump.txt"
00:04:06.882 }
00:04:06.882 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:06.882 08:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:06.882 DPDK memory size 810.000000 MiB in 1 heap(s)
00:04:06.882 1 heaps totaling size 810.000000 MiB
00:04:06.882 size: 810.000000 MiB heap id: 0
00:04:06.882 end heaps----------
00:04:06.882 9 mempools totaling size 595.772034 MiB
00:04:06.882 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:06.882 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:06.882 size: 92.545471 MiB name: bdev_io_456817
00:04:06.882 size: 50.003479 MiB name: msgpool_456817
00:04:06.882 size: 36.509338 MiB name: fsdev_io_456817
00:04:06.882 size: 21.763794 MiB name: PDU_Pool
00:04:06.882 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:06.882 size: 4.133484 MiB name: evtpool_456817
00:04:06.882 size: 0.026123 MiB name: Session_Pool
00:04:06.882 end mempools-------
00:04:06.882 6 memzones totaling size 4.142822 MiB
00:04:06.882 size: 1.000366 MiB name: RG_ring_0_456817
00:04:06.882 size: 1.000366 MiB name: RG_ring_1_456817
00:04:06.882 size: 1.000366 MiB name: RG_ring_4_456817
00:04:06.882 size: 1.000366 MiB name: RG_ring_5_456817
00:04:06.882 size: 0.125366 MiB name: RG_ring_2_456817
00:04:06.882 size: 0.015991 MiB name: RG_ring_3_456817
00:04:06.882 end memzones-------
00:04:06.882 08:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:06.882 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15
00:04:06.882 list of free elements. size: 10.862488 MiB
00:04:06.882 element at address: 0x200018a00000 with size: 0.999878 MiB
00:04:06.882 element at address: 0x200018c00000 with size: 0.999878 MiB
00:04:06.882 element at address: 0x200000400000 with size: 0.998535 MiB
00:04:06.882 element at address: 0x200031800000 with size: 0.994446 MiB
00:04:06.882 element at address: 0x200006400000 with size: 0.959839 MiB
00:04:06.882 element at address: 0x200012c00000 with size: 0.954285 MiB
00:04:06.882 element at address: 0x200018e00000 with size: 0.936584 MiB
00:04:06.882 element at address: 0x200000200000 with size: 0.717346 MiB
00:04:06.882 element at address: 0x20001a600000 with size: 0.582886 MiB
00:04:06.882 element at address: 0x200000c00000 with size: 0.495422 MiB
00:04:06.883 element at address: 0x20000a600000 with size: 0.490723 MiB
00:04:06.883 element at address: 0x200019000000 with size: 0.485657 MiB
00:04:06.883 element at address: 0x200003e00000 with size: 0.481934 MiB
00:04:06.883 element at address: 0x200027a00000 with size: 0.410034 MiB
00:04:06.883 element at address: 0x200000800000 with size: 0.355042 MiB
00:04:06.883 list of standard malloc elements. size: 199.218628 MiB
00:04:06.883 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:04:06.883 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:04:06.883 element at address: 0x200018afff80 with size: 1.000122 MiB
00:04:06.883 element at address: 0x200018cfff80 with size: 1.000122 MiB
00:04:06.883 element at address: 0x200018efff80 with size: 1.000122 MiB
00:04:06.883 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:04:06.883 element at address: 0x200018eeff00 with size: 0.062622 MiB
00:04:06.883 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:04:06.883 element at address: 0x200018eefdc0 with size: 0.000305 MiB
00:04:06.883 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:04:06.883 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:04:06.883 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:04:06.883 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:04:06.883 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:04:06.883 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:04:06.883 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:04:06.883 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:04:06.883 element at address: 0x20000085b040 with size: 0.000183 MiB
00:04:06.883 element at address: 0x20000085f300 with size: 0.000183 MiB
00:04:06.883 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:04:06.883 element at address: 0x20000087f680 with size: 0.000183 MiB
00:04:06.883 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:04:06.883 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200000cff000 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200003efb980 with size: 0.000183 MiB
00:04:06.883 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:04:06.883 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:04:06.883 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:04:06.883 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200012cf44c0 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200018eefc40 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200018eefd00 with size: 0.000183 MiB
00:04:06.883 element at address: 0x2000190bc740 with size: 0.000183 MiB
00:04:06.883 element at address: 0x20001a695380 with size: 0.000183 MiB
00:04:06.883 element at address: 0x20001a695440 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200027a68f80 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200027a69040 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200027a6fc40 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200027a6fe40 with size: 0.000183 MiB
00:04:06.883 element at address: 0x200027a6ff00 with size: 0.000183 MiB
00:04:06.883 list of memzone associated elements. size: 599.918884 MiB
00:04:06.883 element at address: 0x20001a695500 with size: 211.416748 MiB
00:04:06.883 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:06.883 element at address: 0x200027a6ffc0 with size: 157.562561 MiB
00:04:06.883 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:06.883 element at address: 0x200012df4780 with size: 92.045044 MiB
00:04:06.883 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_456817_0
00:04:06.883 element at address: 0x200000dff380 with size: 48.003052 MiB
00:04:06.883 associated memzone info: size: 48.002930 MiB name: MP_msgpool_456817_0
00:04:06.883 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:04:06.883 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_456817_0
00:04:06.883 element at address: 0x2000191be940 with size: 20.255554 MiB
00:04:06.883 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:06.883 element at address: 0x2000319feb40 with size: 18.005066 MiB
00:04:06.883 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:06.883 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:04:06.883 associated memzone info: size: 3.000122 MiB name: MP_evtpool_456817_0
00:04:06.883 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:04:06.883 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_456817
00:04:06.883 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:06.883 associated memzone info: size: 1.007996 MiB name: MP_evtpool_456817
00:04:06.883 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:04:06.883 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:06.883 element at address: 0x2000190bc800 with size: 1.008118 MiB
00:04:06.883 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:06.883 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:04:06.883 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:06.883 element at address: 0x200003efba40 with size: 1.008118 MiB
00:04:06.883 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:06.883 element at address: 0x200000cff180 with size: 1.000488 MiB
00:04:06.883 associated memzone info: size: 1.000366 MiB name: RG_ring_0_456817
00:04:06.883 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:04:06.883 associated memzone info: size: 1.000366 MiB name: RG_ring_1_456817
00:04:06.883 element at address: 0x200012cf4580 with size: 1.000488 MiB
00:04:06.883 associated memzone info: size: 1.000366 MiB name: RG_ring_4_456817
00:04:06.883 element at address: 0x2000318fe940 with size: 1.000488 MiB
00:04:06.883 associated memzone info: size: 1.000366 MiB name: RG_ring_5_456817
00:04:06.883 element at address: 0x20000087f740 with size: 0.500488 MiB
00:04:06.883 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_456817
00:04:06.883 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:04:06.883 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_456817
00:04:06.883 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:04:06.883 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:06.883 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:04:06.883 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:06.883 element at address: 0x20001907c540 with size: 0.250488 MiB
00:04:06.883 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:06.883 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:04:06.883 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_456817
00:04:06.883 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:04:06.883 associated memzone info: size: 0.125366 MiB name: RG_ring_2_456817
00:04:06.883 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:04:06.883 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:06.883 element at address: 0x200027a69100 with size: 0.023743 MiB
00:04:06.883 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:06.883 element at address: 0x20000085b100 with size: 0.016113 MiB
00:04:06.883 associated memzone info: size: 0.015991 MiB name: RG_ring_3_456817
00:04:06.883 element at address: 0x200027a6f240 with size: 0.002441 MiB
00:04:06.883 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:06.883 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:04:06.883 associated memzone info: size: 0.000183 MiB name: MP_msgpool_456817
00:04:06.883 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:04:06.883 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_456817
00:04:06.883 element at address: 0x20000085af00 with size: 0.000305 MiB
00:04:06.883 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_456817
00:04:06.883 element at address: 0x200027a6fd00 with size: 0.000305 MiB
00:04:06.883 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:06.883 08:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:06.883 08:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 456817
00:04:06.883 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 456817 ']'
00:04:06.883 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 456817
00:04:06.883 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:07.144 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:07.144 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 456817
00:04:07.144 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:07.144 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:07.144 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 456817'
00:04:07.144 killing process with pid 456817
00:04:07.144 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 456817
00:04:07.144 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 456817
00:04:07.144
00:04:07.144 real 0m1.428s
00:04:07.144 user 0m1.513s
00:04:07.144 sys 0m0.426s
00:04:07.144 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:07.144 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:07.144 ************************************
00:04:07.144 END TEST dpdk_mem_utility
00:04:07.144 ************************************
00:04:07.405 08:49:32 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:07.406 08:49:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:07.406 08:49:32 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:07.406 08:49:32 -- common/autotest_common.sh@10 -- # set +x
00:04:07.406 ************************************
00:04:07.406 START TEST event
00:04:07.406 ************************************
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:07.406 * Looking for test storage...
00:04:07.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1693 -- # lcov --version
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:07.406 08:49:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:07.406 08:49:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:07.406 08:49:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:07.406 08:49:32 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:07.406 08:49:32 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:07.406 08:49:32 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:07.406 08:49:32 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:07.406 08:49:32 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:07.406 08:49:32 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:07.406 08:49:32 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:07.406 08:49:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:07.406 08:49:32 event -- scripts/common.sh@344 -- # case "$op" in
00:04:07.406 08:49:32 event -- scripts/common.sh@345 -- # : 1
00:04:07.406 08:49:32 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:07.406 08:49:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:07.406 08:49:32 event -- scripts/common.sh@365 -- # decimal 1
00:04:07.406 08:49:32 event -- scripts/common.sh@353 -- # local d=1
00:04:07.406 08:49:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:07.406 08:49:32 event -- scripts/common.sh@355 -- # echo 1
00:04:07.406 08:49:32 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:07.406 08:49:32 event -- scripts/common.sh@366 -- # decimal 2
00:04:07.406 08:49:32 event -- scripts/common.sh@353 -- # local d=2
00:04:07.406 08:49:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:07.406 08:49:32 event -- scripts/common.sh@355 -- # echo 2
00:04:07.406 08:49:32 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:07.406 08:49:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:07.406 08:49:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:07.406 08:49:32 event -- scripts/common.sh@368 -- # return 0
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:07.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.406 --rc genhtml_branch_coverage=1
00:04:07.406 --rc genhtml_function_coverage=1
00:04:07.406 --rc genhtml_legend=1
00:04:07.406 --rc geninfo_all_blocks=1
00:04:07.406 --rc geninfo_unexecuted_blocks=1
00:04:07.406
00:04:07.406 '
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:07.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.406 --rc genhtml_branch_coverage=1
00:04:07.406 --rc genhtml_function_coverage=1
00:04:07.406 --rc genhtml_legend=1
00:04:07.406 --rc geninfo_all_blocks=1
00:04:07.406 --rc geninfo_unexecuted_blocks=1
00:04:07.406
00:04:07.406 '
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:07.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.406 --rc genhtml_branch_coverage=1
00:04:07.406 --rc genhtml_function_coverage=1
00:04:07.406 --rc genhtml_legend=1
00:04:07.406 --rc geninfo_all_blocks=1
00:04:07.406 --rc geninfo_unexecuted_blocks=1
00:04:07.406
00:04:07.406 '
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:07.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.406 --rc genhtml_branch_coverage=1
00:04:07.406 --rc genhtml_function_coverage=1
00:04:07.406 --rc genhtml_legend=1
00:04:07.406 --rc geninfo_all_blocks=1
00:04:07.406 --rc geninfo_unexecuted_blocks=1
00:04:07.406
00:04:07.406 '
00:04:07.406 08:49:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:07.406 08:49:32 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:07.406 08:49:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:04:07.406 08:49:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:07.406 08:49:32 event -- common/autotest_common.sh@10 -- # set +x
00:04:07.667 ************************************
00:04:07.667 START TEST event_perf
00:04:07.667 ************************************
00:04:07.667 08:49:32 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:07.667 Running I/O for 1 seconds...[2024-11-20 08:49:32.990408] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization...
00:04:07.667 [2024-11-20 08:49:32.990511] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457165 ]
00:04:07.667 [2024-11-20 08:49:33.080464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:07.667 [2024-11-20 08:49:33.118856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:07.667 [2024-11-20 08:49:33.119010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:07.667 [2024-11-20 08:49:33.119165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:07.667 Running I/O for 1 seconds...[2024-11-20 08:49:33.119180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:09.051
00:04:09.051 lcore 0: 177915
00:04:09.051 lcore 1: 177918
00:04:09.051 lcore 2: 177918
00:04:09.051 lcore 3: 177917
00:04:09.051 done.
00:04:09.051
00:04:09.051 real 0m1.179s
00:04:09.051 user 0m4.090s
00:04:09.051 sys 0m0.085s
00:04:09.051 08:49:34 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:09.051 08:49:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:09.051 ************************************
00:04:09.051 END TEST event_perf
00:04:09.051 ************************************
00:04:09.051 08:49:34 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:09.051 08:49:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:09.051 08:49:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:09.051 08:49:34 event -- common/autotest_common.sh@10 -- # set +x
00:04:09.051 ************************************
00:04:09.051 START TEST event_reactor
00:04:09.051 ************************************
00:04:09.051 08:49:34 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:09.051 [2024-11-20 08:49:34.245959] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization...
00:04:09.051 [2024-11-20 08:49:34.246040] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457501 ]
00:04:09.051 [2024-11-20 08:49:34.335645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:09.051 [2024-11-20 08:49:34.374420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:09.992 test_start
00:04:09.992 oneshot
00:04:09.992 tick 100
00:04:09.992 tick 100
00:04:09.992 tick 250
00:04:09.992 tick 100
00:04:09.992 tick 100
00:04:09.992 tick 250
00:04:09.992 tick 100
00:04:09.992 tick 500
00:04:09.992 tick 100
00:04:09.992 tick 100
00:04:09.992 tick 250
00:04:09.992 tick 100
00:04:09.992 tick 100
00:04:09.992 test_end
00:04:09.992
00:04:09.992 real 0m1.175s
00:04:09.992 user 0m1.093s
00:04:09.992 sys 0m0.077s
00:04:09.992 08:49:35 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:09.992 08:49:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:09.992 ************************************
00:04:09.992 END TEST event_reactor
00:04:09.992 ************************************
00:04:09.993 08:49:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:09.993 08:49:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:09.993 08:49:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:09.993 08:49:35 event -- common/autotest_common.sh@10 -- # set +x
00:04:09.993 ************************************
00:04:09.993 START TEST event_reactor_perf
00:04:09.993 ************************************
00:04:09.993 08:49:35 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:09.993 [2024-11-20 08:49:35.498798] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization...
00:04:09.993 [2024-11-20 08:49:35.498902] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457853 ]
00:04:10.253 [2024-11-20 08:49:35.586684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:10.253 [2024-11-20 08:49:35.624775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:11.195 test_start
00:04:11.195 test_end
00:04:11.195 Performance: 539586 events per second
00:04:11.195
00:04:11.195 real 0m1.174s
00:04:11.195 user 0m1.094s
00:04:11.195 sys 0m0.077s
00:04:11.195 08:49:36 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:11.195 08:49:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:11.195 ************************************
00:04:11.195 END TEST event_reactor_perf
00:04:11.195 ************************************
00:04:11.195 08:49:36 event -- event/event.sh@49 -- # uname -s
00:04:11.195 08:49:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:11.195 08:49:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:11.195 08:49:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:11.195 08:49:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:11.195 08:49:36 event -- common/autotest_common.sh@10 -- # set +x
00:04:11.456 ************************************
00:04:11.456 START TEST event_scheduler
00:04:11.456 ************************************
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:11.456 * Looking for test storage...
00:04:11.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:11.456 08:49:36 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:11.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.456 --rc genhtml_branch_coverage=1
00:04:11.456 --rc genhtml_function_coverage=1
00:04:11.456 --rc genhtml_legend=1
00:04:11.456 --rc geninfo_all_blocks=1
00:04:11.456 --rc geninfo_unexecuted_blocks=1
00:04:11.456
00:04:11.456 '
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:11.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.456 --rc genhtml_branch_coverage=1
00:04:11.456 --rc genhtml_function_coverage=1
00:04:11.456 --rc genhtml_legend=1
00:04:11.456 --rc geninfo_all_blocks=1
00:04:11.456 --rc geninfo_unexecuted_blocks=1
00:04:11.456
00:04:11.456 '
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:11.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.456 --rc genhtml_branch_coverage=1
00:04:11.456 --rc genhtml_function_coverage=1
00:04:11.456 --rc genhtml_legend=1
00:04:11.456 --rc geninfo_all_blocks=1
00:04:11.456 --rc geninfo_unexecuted_blocks=1
00:04:11.456
00:04:11.456 '
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:11.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.456 --rc genhtml_branch_coverage=1
00:04:11.456 --rc genhtml_function_coverage=1
00:04:11.456 --rc genhtml_legend=1
00:04:11.456 --rc geninfo_all_blocks=1
00:04:11.456 --rc geninfo_unexecuted_blocks=1
00:04:11.456
00:04:11.456 '
00:04:11.456 08:49:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:11.456 08:49:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=458222
00:04:11.456 08:49:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:11.456 08:49:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:11.456 08:49:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 458222
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 458222 ']'
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:11.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:11.456 08:49:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:11.716 [2024-11-20 08:49:36.993919] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization...
00:04:11.716 [2024-11-20 08:49:36.993990] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458222 ]
00:04:11.716 [2024-11-20 08:49:37.085074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:11.716 [2024-11-20 08:49:37.141217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:11.716 [2024-11-20 08:49:37.141318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:11.716 [2024-11-20 08:49:37.141465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:11.716 [2024-11-20 08:49:37.141468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:12.287 08:49:37 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:12.287 08:49:37 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:04:12.287 08:49:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:12.287 08:49:37 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:12.287 08:49:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:12.287 [2024-11-20 08:49:37.800003] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:04:12.287 [2024-11-20 08:49:37.800023] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:04:12.287 [2024-11-20 08:49:37.800033] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:12.287 [2024-11-20 08:49:37.800039] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:12.287 [2024-11-20 08:49:37.800044] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:12.287 08:49:37 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:12.287 08:49:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:12.287 08:49:37 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:12.287 08:49:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:12.548 [2024-11-20 08:49:37.862437] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:12.548 08:49:37 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:12.548 08:49:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:12.548 08:49:37 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:12.548 08:49:37 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:12.548 08:49:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:12.548 ************************************
00:04:12.548 START TEST scheduler_create_thread
00:04:12.548 ************************************
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:12.548 2
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:12.548 3
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:12.548 4
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:12.548 5
00:04:12.548 08:49:37
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.548 6 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.548 7 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.548 8 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:12.548 08:49:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.548 08:49:37 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.548 9 00:04:12.548 08:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.548 08:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:12.548 08:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.548 08:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.119 10 00:04:13.119 08:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.119 08:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:13.119 08:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.119 08:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.503 08:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.503 08:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:14.503 08:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:14.503 08:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.503 08:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.445 08:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.445 08:49:40 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:15.445 08:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.445 08:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.052 08:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.052 08:49:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:16.052 08:49:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:16.052 08:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.052 08:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.727 08:49:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.727 00:04:16.727 real 0m4.225s 00:04:16.727 user 0m0.021s 00:04:16.727 sys 0m0.011s 00:04:16.727 08:49:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.727 08:49:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.727 ************************************ 00:04:16.727 END TEST scheduler_create_thread 00:04:16.727 ************************************ 00:04:16.727 08:49:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:16.727 08:49:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 458222 00:04:16.727 08:49:42 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 458222 ']' 00:04:16.727 08:49:42 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 458222 00:04:16.727 08:49:42 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:16.727 08:49:42 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.727 08:49:42 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 458222 00:04:16.727 08:49:42 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:16.727 08:49:42 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:16.727 08:49:42 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 458222' 00:04:16.727 killing process with pid 458222 00:04:16.727 08:49:42 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 458222 00:04:16.727 08:49:42 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 458222 00:04:16.988 [2024-11-20 08:49:42.508354] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:17.249 00:04:17.249 real 0m5.929s 00:04:17.249 user 0m13.789s 00:04:17.249 sys 0m0.429s 00:04:17.249 08:49:42 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.249 08:49:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.249 ************************************ 00:04:17.249 END TEST event_scheduler 00:04:17.249 ************************************ 00:04:17.250 08:49:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:17.250 08:49:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:17.250 08:49:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.250 08:49:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.250 08:49:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:17.250 ************************************ 00:04:17.250 START TEST app_repeat 00:04:17.250 ************************************ 00:04:17.250 08:49:42 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=459318 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 459318' 00:04:17.250 Process app_repeat pid: 459318 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:17.250 spdk_app_start Round 0 00:04:17.250 08:49:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 459318 /var/tmp/spdk-nbd.sock 00:04:17.250 08:49:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 459318 ']' 00:04:17.250 08:49:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.250 08:49:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.250 08:49:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:17.250 08:49:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.250 08:49:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:17.510 [2024-11-20 08:49:42.783235] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:04:17.510 [2024-11-20 08:49:42.783307] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459318 ] 00:04:17.510 [2024-11-20 08:49:42.870923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.510 [2024-11-20 08:49:42.903952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.510 [2024-11-20 08:49:42.903954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.510 08:49:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.510 08:49:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:17.510 08:49:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:17.771 Malloc0 00:04:17.771 08:49:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.033 Malloc1 00:04:18.033 08:49:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.033 08:49:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.033 08:49:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.033 08:49:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:18.033 08:49:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.033 08:49:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:18.033 08:49:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.033 
08:49:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.034 08:49:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.034 08:49:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:18.034 08:49:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.034 08:49:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:18.034 08:49:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:18.034 08:49:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:18.034 08:49:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.034 08:49:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:18.034 /dev/nbd0 00:04:18.034 08:49:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:18.295 08:49:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:18.295 1+0 records in 00:04:18.295 1+0 records out 00:04:18.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296892 s, 13.8 MB/s 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:18.295 08:49:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.295 08:49:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.295 08:49:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:18.295 /dev/nbd1 00:04:18.295 08:49:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:18.295 08:49:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:18.295 08:49:43 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.295 1+0 records in 00:04:18.295 1+0 records out 00:04:18.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279269 s, 14.7 MB/s 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.295 08:49:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:18.296 08:49:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:18.296 08:49:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.296 08:49:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.296 08:49:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.296 08:49:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.296 08:49:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:18.557 { 00:04:18.557 "nbd_device": "/dev/nbd0", 00:04:18.557 "bdev_name": "Malloc0" 00:04:18.557 }, 00:04:18.557 { 00:04:18.557 "nbd_device": "/dev/nbd1", 00:04:18.557 "bdev_name": "Malloc1" 00:04:18.557 } 00:04:18.557 ]' 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:18.557 { 00:04:18.557 "nbd_device": "/dev/nbd0", 00:04:18.557 "bdev_name": "Malloc0" 00:04:18.557 
}, 00:04:18.557 { 00:04:18.557 "nbd_device": "/dev/nbd1", 00:04:18.557 "bdev_name": "Malloc1" 00:04:18.557 } 00:04:18.557 ]' 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:18.557 /dev/nbd1' 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:18.557 /dev/nbd1' 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:18.557 256+0 records in 00:04:18.557 256+0 records out 00:04:18.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121681 s, 86.2 MB/s 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.557 08:49:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:18.819 256+0 records in 00:04:18.819 256+0 records out 00:04:18.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123618 s, 84.8 MB/s 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:18.819 256+0 records in 00:04:18.819 256+0 records out 00:04:18.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126938 s, 82.6 MB/s 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:18.819 08:49:44 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.819 08:49:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:19.080 08:49:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:19.080 08:49:44 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:19.080 08:49:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:19.080 08:49:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.080 08:49:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.080 08:49:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:19.080 08:49:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:19.080 08:49:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.080 08:49:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.080 08:49:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.080 08:49:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.341 08:49:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:19.341 08:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:19.341 08:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.342 08:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:19.342 08:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:19.342 08:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.342 08:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:19.342 08:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:19.342 08:49:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:19.342 08:49:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:19.342 08:49:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:19.342 08:49:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:19.342 08:49:44 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:19.603 08:49:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:19.603 [2024-11-20 08:49:45.019674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:19.603 [2024-11-20 08:49:45.049083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.603 [2024-11-20 08:49:45.049083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.603 [2024-11-20 08:49:45.078105] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:19.603 [2024-11-20 08:49:45.078137] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:22.903 08:49:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:22.903 08:49:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:22.903 spdk_app_start Round 1 00:04:22.904 08:49:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 459318 /var/tmp/spdk-nbd.sock 00:04:22.904 08:49:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 459318 ']' 00:04:22.904 08:49:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:22.904 08:49:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.904 08:49:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:22.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:22.904 08:49:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:22.904 08:49:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:22.904 08:49:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:22.904 08:49:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:22.904 08:49:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:22.904 Malloc0
00:04:22.904 08:49:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:23.165 Malloc1
00:04:23.165 08:49:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:23.165 08:49:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:23.426 /dev/nbd0
00:04:23.426 08:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:23.426 08:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:23.426 1+0 records in
00:04:23.426 1+0 records out
00:04:23.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274268 s, 14.9 MB/s
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:23.426 08:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:23.426 08:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:23.426 08:49:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:23.426 /dev/nbd1
00:04:23.426 08:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:23.426 08:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:23.426 08:49:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:23.687 1+0 records in
00:04:23.687 1+0 records out
00:04:23.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164012 s, 25.0 MB/s
00:04:23.687 08:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:23.687 08:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:23.687 08:49:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:23.687 08:49:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:23.687 08:49:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:23.687 08:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:23.687 08:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:23.687 08:49:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:23.687 08:49:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:23.687 08:49:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:23.687 {
00:04:23.687 "nbd_device": "/dev/nbd0",
00:04:23.687 "bdev_name": "Malloc0"
00:04:23.687 },
00:04:23.687 {
00:04:23.687 "nbd_device": "/dev/nbd1",
00:04:23.687 "bdev_name": "Malloc1"
00:04:23.687 }
00:04:23.687 ]'
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:23.687 {
00:04:23.687 "nbd_device": "/dev/nbd0",
00:04:23.687 "bdev_name": "Malloc0"
00:04:23.687 },
00:04:23.687 {
00:04:23.687 "nbd_device": "/dev/nbd1",
00:04:23.687 "bdev_name": "Malloc1"
00:04:23.687 }
00:04:23.687 ]'
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:23.687 /dev/nbd1'
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:23.687 /dev/nbd1'
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:23.687 08:49:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:23.947 256+0 records in
00:04:23.947 256+0 records out
00:04:23.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126963 s, 82.6 MB/s
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:23.947 256+0 records in
00:04:23.947 256+0 records out
00:04:23.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123271 s, 85.1 MB/s
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:23.947 256+0 records in
00:04:23.947 256+0 records out
00:04:23.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012907 s, 81.2 MB/s
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:23.947 08:49:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:24.207 08:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:24.207 08:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:24.207 08:49:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:24.207 08:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:24.207 08:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:24.207 08:49:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:24.207 08:49:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:24.207 08:49:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:24.207 08:49:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:24.207 08:49:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:24.207 08:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:24.467 08:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:24.467 08:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:24.467 08:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:24.467 08:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:24.467 08:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:24.467 08:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:24.467 08:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:24.467 08:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:24.468 08:49:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:24.468 08:49:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:24.468 08:49:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:24.468 08:49:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:24.468 08:49:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:24.728 08:49:50 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:24.728 [2024-11-20 08:49:50.162752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:24.728 [2024-11-20 08:49:50.192665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:24.728 [2024-11-20 08:49:50.192665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:24.728 [2024-11-20 08:49:50.222303] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:24.728 [2024-11-20 08:49:50.222334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:28.028 08:49:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:28.028 08:49:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:04:28.028 spdk_app_start Round 2
00:04:28.028 08:49:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 459318 /var/tmp/spdk-nbd.sock
00:04:28.028 08:49:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 459318 ']'
00:04:28.028 08:49:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:28.028 08:49:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:28.028 08:49:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:28.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:28.028 08:49:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:28.028 08:49:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:28.028 08:49:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:28.028 08:49:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:28.028 08:49:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:28.028 Malloc0
00:04:28.028 08:49:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:28.289 Malloc1
00:04:28.289 08:49:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:28.289 08:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:28.290 08:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:28.290 08:49:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:28.550 /dev/nbd0
00:04:28.550 08:49:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:28.550 08:49:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:28.550 1+0 records in
00:04:28.550 1+0 records out
00:04:28.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278458 s, 14.7 MB/s
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:28.550 08:49:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:28.550 08:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:28.550 08:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:28.550 08:49:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:28.550 /dev/nbd1
00:04:28.811 08:49:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:28.811 08:49:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:28.811 1+0 records in
00:04:28.811 1+0 records out
00:04:28.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288065 s, 14.2 MB/s
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:28.811 08:49:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:28.811 08:49:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:28.811 08:49:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:28.811 08:49:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:28.811 08:49:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:28.811 08:49:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:28.811 08:49:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:28.811 {
00:04:28.811 "nbd_device": "/dev/nbd0",
00:04:28.811 "bdev_name": "Malloc0"
00:04:28.811 },
00:04:28.811 {
00:04:28.811 "nbd_device": "/dev/nbd1",
00:04:28.811 "bdev_name": "Malloc1"
00:04:28.811 }
00:04:28.811 ]'
00:04:28.812 08:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:28.812 {
00:04:28.812 "nbd_device": "/dev/nbd0",
00:04:28.812 "bdev_name": "Malloc0"
00:04:28.812 },
00:04:28.812 {
00:04:28.812 "nbd_device": "/dev/nbd1",
00:04:28.812 "bdev_name": "Malloc1"
00:04:28.812 }
00:04:28.812 ]'
00:04:28.812 08:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:29.073 /dev/nbd1'
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:29.073 /dev/nbd1'
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:29.073 256+0 records in
00:04:29.073 256+0 records out
00:04:29.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127403 s, 82.3 MB/s
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:29.073 256+0 records in
00:04:29.073 256+0 records out
00:04:29.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121163 s, 86.5 MB/s
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:29.073 256+0 records in
00:04:29.073 256+0 records out
00:04:29.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128857 s, 81.4 MB/s
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:29.073 08:49:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:29.335 08:49:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:29.596 08:49:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:29.596 08:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:29.596 08:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:29.596 08:49:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:29.596 08:49:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:29.596 08:49:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:29.596 08:49:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:29.596 08:49:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:29.596 08:49:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:29.596 08:49:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:29.596 08:49:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:29.596 08:49:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:29.596 08:49:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:29.858 08:49:55 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:29.858 [2024-11-20 08:49:55.318700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:29.858 [2024-11-20 08:49:55.348472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:29.858 [2024-11-20 08:49:55.348474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:29.858 [2024-11-20 08:49:55.377659] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:29.858 [2024-11-20 08:49:55.377694] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:33.157 08:49:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 459318 /var/tmp/spdk-nbd.sock
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 459318 ']'
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:33.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:33.157 08:49:58 event.app_repeat -- event/event.sh@39 -- # killprocess 459318
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 459318 ']'
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 459318
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 459318
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 459318'
00:04:33.157 killing process with pid 459318
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@973 -- # kill 459318
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@978 -- # wait 459318
00:04:33.157 spdk_app_start is called in Round 0.
00:04:33.157 Shutdown signal received, stop current app iteration
00:04:33.157 Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 reinitialization...
00:04:33.157 spdk_app_start is called in Round 1.
00:04:33.157 Shutdown signal received, stop current app iteration
00:04:33.157 Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 reinitialization...
00:04:33.157 spdk_app_start is called in Round 2.
00:04:33.157 Shutdown signal received, stop current app iteration
00:04:33.157 Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 reinitialization...
00:04:33.157 spdk_app_start is called in Round 3.
00:04:33.157 Shutdown signal received, stop current app iteration
00:04:33.157 08:49:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:04:33.157 08:49:58 event.app_repeat -- event/event.sh@42 -- # return 0
00:04:33.157
00:04:33.157 real 0m15.837s
00:04:33.157 user 0m34.800s
00:04:33.157 sys 0m2.297s
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:33.157 08:49:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:33.157 ************************************
00:04:33.157 END TEST app_repeat
00:04:33.157 ************************************
00:04:33.157 08:49:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:04:33.157 08:49:58 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:33.157 08:49:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:33.157 08:49:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:33.157 08:49:58 event -- common/autotest_common.sh@10 -- # set +x
00:04:33.157 ************************************
00:04:33.157 START TEST cpu_locks
00:04:33.157 ************************************
00:04:33.157 08:49:58 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:33.418 * Looking for test storage...
00:04:33.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.418 08:49:58 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.418 --rc genhtml_branch_coverage=1 00:04:33.418 --rc genhtml_function_coverage=1 00:04:33.418 --rc genhtml_legend=1 00:04:33.418 --rc geninfo_all_blocks=1 00:04:33.418 --rc geninfo_unexecuted_blocks=1 00:04:33.418 00:04:33.418 ' 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.418 --rc genhtml_branch_coverage=1 00:04:33.418 --rc genhtml_function_coverage=1 00:04:33.418 --rc genhtml_legend=1 00:04:33.418 --rc geninfo_all_blocks=1 00:04:33.418 --rc geninfo_unexecuted_blocks=1 
00:04:33.418 00:04:33.418 ' 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.418 --rc genhtml_branch_coverage=1 00:04:33.418 --rc genhtml_function_coverage=1 00:04:33.418 --rc genhtml_legend=1 00:04:33.418 --rc geninfo_all_blocks=1 00:04:33.418 --rc geninfo_unexecuted_blocks=1 00:04:33.418 00:04:33.418 ' 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.418 --rc genhtml_branch_coverage=1 00:04:33.418 --rc genhtml_function_coverage=1 00:04:33.418 --rc genhtml_legend=1 00:04:33.418 --rc geninfo_all_blocks=1 00:04:33.418 --rc geninfo_unexecuted_blocks=1 00:04:33.418 00:04:33.418 ' 00:04:33.418 08:49:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:33.418 08:49:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:33.418 08:49:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:33.418 08:49:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.418 08:49:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.418 ************************************ 00:04:33.418 START TEST default_locks 00:04:33.418 ************************************ 00:04:33.418 08:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:33.418 08:49:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=462903 00:04:33.418 08:49:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 462903 00:04:33.418 08:49:58 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.418 08:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 462903 ']' 00:04:33.418 08:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.418 08:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.418 08:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.418 08:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.418 08:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.678 [2024-11-20 08:49:58.964626] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:04:33.678 [2024-11-20 08:49:58.964689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462903 ] 00:04:33.678 [2024-11-20 08:49:59.050677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.678 [2024-11-20 08:49:59.085723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.248 08:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.248 08:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:34.248 08:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 462903 00:04:34.248 08:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 462903 00:04:34.248 08:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:34.820 lslocks: write error 00:04:34.820 08:50:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 462903 00:04:34.820 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 462903 ']' 00:04:34.820 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 462903 00:04:34.820 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:34.820 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.820 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 462903 00:04:34.820 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.820 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.820 08:50:00 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 462903' 00:04:34.820 killing process with pid 462903 00:04:34.820 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 462903 00:04:34.820 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 462903 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 462903 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 462903 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 462903 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 462903 ']' 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (462903) - No such process 00:04:35.081 ERROR: process (pid: 462903) is no longer running 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:35.081 00:04:35.081 real 0m1.508s 00:04:35.081 user 0m1.624s 00:04:35.081 sys 0m0.534s 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.081 08:50:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.081 ************************************ 00:04:35.081 END TEST default_locks 00:04:35.081 ************************************ 00:04:35.082 08:50:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:35.082 08:50:00 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.082 08:50:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.082 08:50:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.082 ************************************ 00:04:35.082 START TEST default_locks_via_rpc 00:04:35.082 ************************************ 00:04:35.082 08:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:35.082 08:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=463200 00:04:35.082 08:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 463200 00:04:35.082 08:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.082 08:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 463200 ']' 00:04:35.082 08:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.082 08:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.082 08:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.082 08:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.082 08:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.082 [2024-11-20 08:50:00.557236] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:04:35.082 [2024-11-20 08:50:00.557296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463200 ] 00:04:35.342 [2024-11-20 08:50:00.643343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.342 [2024-11-20 08:50:00.677079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.913 08:50:01 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 463200 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 463200 00:04:35.913 08:50:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.174 08:50:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 463200 00:04:36.174 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 463200 ']' 00:04:36.174 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 463200 00:04:36.174 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:36.174 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.174 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463200 00:04:36.174 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.174 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.174 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463200' 00:04:36.174 killing process with pid 463200 00:04:36.174 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 463200 00:04:36.174 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 463200 00:04:36.434 00:04:36.434 real 0m1.297s 00:04:36.434 user 0m1.420s 00:04:36.434 sys 0m0.424s 00:04:36.434 08:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.434 08:50:01 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.434 ************************************ 00:04:36.434 END TEST default_locks_via_rpc 00:04:36.434 ************************************ 00:04:36.434 08:50:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:36.434 08:50:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.434 08:50:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.434 08:50:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.434 ************************************ 00:04:36.434 START TEST non_locking_app_on_locked_coremask 00:04:36.434 ************************************ 00:04:36.434 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:36.435 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=463406 00:04:36.435 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 463406 /var/tmp/spdk.sock 00:04:36.435 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.435 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 463406 ']' 00:04:36.435 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.435 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.435 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:36.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.435 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.435 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.435 [2024-11-20 08:50:01.921772] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:04:36.435 [2024-11-20 08:50:01.921834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463406 ] 00:04:36.695 [2024-11-20 08:50:02.009181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.695 [2024-11-20 08:50:02.049880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.267 08:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.267 08:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:37.267 08:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=463646 00:04:37.267 08:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:37.267 08:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 463646 /var/tmp/spdk2.sock 00:04:37.267 08:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 463646 ']' 00:04:37.267 08:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:37.267 08:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.267 08:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:37.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:37.267 08:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.267 08:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.267 [2024-11-20 08:50:02.774130] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:04:37.267 [2024-11-20 08:50:02.774189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463646 ] 00:04:37.528 [2024-11-20 08:50:02.862001] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:37.528 [2024-11-20 08:50:02.862027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.528 [2024-11-20 08:50:02.924183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.100 08:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.100 08:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.100 08:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 463406 00:04:38.100 08:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 463406 00:04:38.100 08:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:38.670 lslocks: write error 00:04:38.670 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 463406 00:04:38.670 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 463406 ']' 00:04:38.670 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 463406 00:04:38.670 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:38.670 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.670 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463406 00:04:38.931 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.931 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.931 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 463406' 00:04:38.931 killing process with pid 463406 00:04:38.931 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 463406 00:04:38.931 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 463406 00:04:39.191 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 463646 00:04:39.191 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 463646 ']' 00:04:39.191 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 463646 00:04:39.191 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.191 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.191 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463646 00:04:39.191 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.191 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.191 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463646' 00:04:39.191 killing process with pid 463646 00:04:39.191 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 463646 00:04:39.191 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 463646 00:04:39.452 00:04:39.452 real 0m2.975s 00:04:39.452 user 0m3.324s 00:04:39.452 sys 0m0.903s 00:04:39.452 08:50:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.452 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.452 ************************************ 00:04:39.452 END TEST non_locking_app_on_locked_coremask 00:04:39.452 ************************************ 00:04:39.452 08:50:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:39.452 08:50:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.452 08:50:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.452 08:50:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.452 ************************************ 00:04:39.452 START TEST locking_app_on_unlocked_coremask 00:04:39.452 ************************************ 00:04:39.452 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:39.452 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=464028 00:04:39.452 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 464028 /var/tmp/spdk.sock 00:04:39.452 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:39.452 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 464028 ']' 00:04:39.452 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.452 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.452 08:50:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.452 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.452 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.452 [2024-11-20 08:50:04.966724] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:04:39.452 [2024-11-20 08:50:04.966778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464028 ] 00:04:39.713 [2024-11-20 08:50:05.053684] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:39.713 [2024-11-20 08:50:05.053714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.713 [2024-11-20 08:50:05.088284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.283 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.283 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:40.283 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=464349 00:04:40.283 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 464349 /var/tmp/spdk2.sock 00:04:40.283 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 464349 ']' 00:04:40.283 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:40.283 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.283 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.283 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.283 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.283 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.543 [2024-11-20 08:50:05.810604] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:04:40.543 [2024-11-20 08:50:05.810660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464349 ] 00:04:40.543 [2024-11-20 08:50:05.898171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.543 [2024-11-20 08:50:05.956421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.113 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.113 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:41.113 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 464349 00:04:41.113 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 464349 00:04:41.113 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.685 lslocks: write error 00:04:41.685 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 464028 00:04:41.685 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 464028 ']' 00:04:41.685 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 464028 00:04:41.685 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:41.685 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.685 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464028 00:04:41.685 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.685 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.686 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464028' 00:04:41.686 killing process with pid 464028 00:04:41.686 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 464028 00:04:41.686 08:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 464028 00:04:41.946 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 464349 00:04:41.946 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 464349 ']' 00:04:41.946 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 464349 00:04:41.946 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:41.946 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.946 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464349 00:04:41.946 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.946 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.946 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464349' 00:04:41.946 killing process with pid 464349 00:04:41.946 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 464349 00:04:41.946 08:50:07 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 464349 00:04:42.206 00:04:42.206 real 0m2.674s 00:04:42.206 user 0m2.978s 00:04:42.206 sys 0m0.823s 00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.206 ************************************ 00:04:42.206 END TEST locking_app_on_unlocked_coremask 00:04:42.206 ************************************ 00:04:42.206 08:50:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:42.206 08:50:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.206 08:50:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.206 08:50:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.206 ************************************ 00:04:42.206 START TEST locking_app_on_locked_coremask 00:04:42.206 ************************************ 00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=464726 00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 464726 /var/tmp/spdk.sock 00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 464726 ']' 00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.206 08:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.206 [2024-11-20 08:50:07.716182] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:04:42.206 [2024-11-20 08:50:07.716230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464726 ] 00:04:42.467 [2024-11-20 08:50:07.797717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.467 [2024-11-20 08:50:07.827720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=464772 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 464772 /var/tmp/spdk2.sock 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 464772 /var/tmp/spdk2.sock 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 464772 /var/tmp/spdk2.sock 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 464772 ']' 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.038 08:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.038 [2024-11-20 08:50:08.549564] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:04:43.038 [2024-11-20 08:50:08.549619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464772 ] 00:04:43.299 [2024-11-20 08:50:08.633504] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 464726 has claimed it. 00:04:43.299 [2024-11-20 08:50:08.633539] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:43.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (464772) - No such process 00:04:43.871 ERROR: process (pid: 464772) is no longer running 00:04:43.871 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.871 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:43.871 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:43.871 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.871 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:43.871 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:43.871 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 464726 00:04:43.871 08:50:09 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 464726 00:04:43.871 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.132 lslocks: write error 00:04:44.132 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 464726 00:04:44.132 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 464726 ']' 00:04:44.132 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 464726 00:04:44.132 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:44.132 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.132 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464726 00:04:44.132 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.132 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.132 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464726' 00:04:44.132 killing process with pid 464726 00:04:44.132 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 464726 00:04:44.132 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 464726 00:04:44.393 00:04:44.393 real 0m2.087s 00:04:44.393 user 0m2.371s 00:04:44.393 sys 0m0.562s 00:04:44.393 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.393 08:50:09 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:44.393 ************************************ 00:04:44.393 END TEST locking_app_on_locked_coremask 00:04:44.393 ************************************ 00:04:44.393 08:50:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:44.393 08:50:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.393 08:50:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.393 08:50:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.393 ************************************ 00:04:44.393 START TEST locking_overlapped_coremask 00:04:44.393 ************************************ 00:04:44.393 08:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:44.393 08:50:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=465104 00:04:44.393 08:50:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 465104 /var/tmp/spdk.sock 00:04:44.393 08:50:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:44.393 08:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 465104 ']' 00:04:44.393 08:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.393 08:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.393 08:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:44.393 08:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.393 08:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.393 [2024-11-20 08:50:09.880461] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:04:44.393 [2024-11-20 08:50:09.880518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465104 ] 00:04:44.653 [2024-11-20 08:50:09.965665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:44.653 [2024-11-20 08:50:10.004344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.653 [2024-11-20 08:50:10.004560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.653 [2024-11-20 08:50:10.004560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=465409 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 465409 /var/tmp/spdk2.sock 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 465409 /var/tmp/spdk2.sock 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 465409 /var/tmp/spdk2.sock 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 465409 ']' 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.225 08:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.225 [2024-11-20 08:50:10.733408] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:04:45.225 [2024-11-20 08:50:10.733461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465409 ] 00:04:45.486 [2024-11-20 08:50:10.845384] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 465104 has claimed it. 00:04:45.486 [2024-11-20 08:50:10.845426] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:46.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (465409) - No such process 00:04:46.057 ERROR: process (pid: 465409) is no longer running 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 465104 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 465104 ']' 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 465104 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465104 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465104' 00:04:46.057 killing process with pid 465104 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 465104 00:04:46.057 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 465104 00:04:46.317 00:04:46.317 real 0m1.773s 00:04:46.317 user 0m5.128s 00:04:46.317 sys 0m0.381s 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.317 ************************************ 
00:04:46.317 END TEST locking_overlapped_coremask 00:04:46.317 ************************************ 00:04:46.317 08:50:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:46.317 08:50:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.317 08:50:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.317 08:50:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.317 ************************************ 00:04:46.317 START TEST locking_overlapped_coremask_via_rpc 00:04:46.317 ************************************ 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=465482 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 465482 /var/tmp/spdk.sock 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 465482 ']' 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:46.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.317 08:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.317 [2024-11-20 08:50:11.726270] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:04:46.317 [2024-11-20 08:50:11.726320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465482 ] 00:04:46.317 [2024-11-20 08:50:11.811062] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:46.317 [2024-11-20 08:50:11.811089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.578 [2024-11-20 08:50:11.844775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.578 [2024-11-20 08:50:11.844928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.578 [2024-11-20 08:50:11.844928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.151 08:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.151 08:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:47.151 08:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=465816 00:04:47.151 08:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 465816 /var/tmp/spdk2.sock 00:04:47.151 08:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 465816 ']' 00:04:47.151 08:50:12 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:47.151 08:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.151 08:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.151 08:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.151 08:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.151 08:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.151 [2024-11-20 08:50:12.582115] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:04:47.151 [2024-11-20 08:50:12.582177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465816 ] 00:04:47.413 [2024-11-20 08:50:12.695522] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:47.413 [2024-11-20 08:50:12.695554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:47.413 [2024-11-20 08:50:12.773285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.413 [2024-11-20 08:50:12.773442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.413 [2024-11-20 08:50:12.773443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.985 08:50:13 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.985 [2024-11-20 08:50:13.382241] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 465482 has claimed it. 00:04:47.985 request: 00:04:47.985 { 00:04:47.985 "method": "framework_enable_cpumask_locks", 00:04:47.985 "req_id": 1 00:04:47.985 } 00:04:47.985 Got JSON-RPC error response 00:04:47.985 response: 00:04:47.985 { 00:04:47.985 "code": -32603, 00:04:47.985 "message": "Failed to claim CPU core: 2" 00:04:47.985 } 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 465482 /var/tmp/spdk.sock 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 465482 ']' 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.985 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 465816 /var/tmp/spdk2.sock 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 465816 ']' 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:48.246 00:04:48.246 real 0m2.086s 00:04:48.246 user 0m0.871s 00:04:48.246 sys 0m0.144s 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.246 08:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.246 ************************************ 00:04:48.246 END TEST locking_overlapped_coremask_via_rpc 00:04:48.246 ************************************ 00:04:48.507 08:50:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:48.507 08:50:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 465482 ]] 00:04:48.507 08:50:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 465482 00:04:48.507 08:50:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 465482 ']' 00:04:48.507 08:50:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 465482 00:04:48.507 08:50:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:48.507 08:50:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.507 08:50:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465482 00:04:48.507 08:50:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.507 08:50:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.507 08:50:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465482' 00:04:48.507 killing process with pid 465482 00:04:48.507 08:50:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 465482 00:04:48.507 08:50:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 465482 00:04:48.768 08:50:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 465816 ]] 00:04:48.768 08:50:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 465816 00:04:48.768 08:50:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 465816 ']' 00:04:48.768 08:50:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 465816 00:04:48.768 08:50:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:48.768 08:50:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.768 08:50:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465816 00:04:48.768 08:50:14 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:48.768 08:50:14 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:48.768 08:50:14 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465816' 00:04:48.768 
killing process with pid 465816 00:04:48.768 08:50:14 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 465816 00:04:48.768 08:50:14 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 465816 00:04:49.045 08:50:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.045 08:50:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:49.045 08:50:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 465482 ]] 00:04:49.045 08:50:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 465482 00:04:49.045 08:50:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 465482 ']' 00:04:49.045 08:50:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 465482 00:04:49.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (465482) - No such process 00:04:49.045 08:50:14 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 465482 is not found' 00:04:49.045 Process with pid 465482 is not found 00:04:49.045 08:50:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 465816 ]] 00:04:49.045 08:50:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 465816 00:04:49.045 08:50:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 465816 ']' 00:04:49.045 08:50:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 465816 00:04:49.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (465816) - No such process 00:04:49.045 08:50:14 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 465816 is not found' 00:04:49.045 Process with pid 465816 is not found 00:04:49.045 08:50:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.045 00:04:49.045 real 0m15.652s 00:04:49.045 user 0m27.713s 00:04:49.045 sys 0m4.741s 00:04:49.045 08:50:14 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.045 08:50:14 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:04:49.045 ************************************ 00:04:49.045 END TEST cpu_locks 00:04:49.045 ************************************ 00:04:49.045 00:04:49.045 real 0m41.621s 00:04:49.045 user 1m22.853s 00:04:49.045 sys 0m8.144s 00:04:49.046 08:50:14 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.046 08:50:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.046 ************************************ 00:04:49.046 END TEST event 00:04:49.046 ************************************ 00:04:49.046 08:50:14 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:49.046 08:50:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.046 08:50:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.046 08:50:14 -- common/autotest_common.sh@10 -- # set +x 00:04:49.046 ************************************ 00:04:49.046 START TEST thread 00:04:49.046 ************************************ 00:04:49.046 08:50:14 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:49.046 * Looking for test storage... 
00:04:49.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:49.046 08:50:14 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.046 08:50:14 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.046 08:50:14 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.308 08:50:14 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.308 08:50:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.308 08:50:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.308 08:50:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.308 08:50:14 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.308 08:50:14 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.308 08:50:14 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.308 08:50:14 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.308 08:50:14 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.308 08:50:14 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.308 08:50:14 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.308 08:50:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.308 08:50:14 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:49.308 08:50:14 thread -- scripts/common.sh@345 -- # : 1 00:04:49.308 08:50:14 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.308 08:50:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.308 08:50:14 thread -- scripts/common.sh@365 -- # decimal 1 00:04:49.308 08:50:14 thread -- scripts/common.sh@353 -- # local d=1 00:04:49.308 08:50:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.308 08:50:14 thread -- scripts/common.sh@355 -- # echo 1 00:04:49.308 08:50:14 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.308 08:50:14 thread -- scripts/common.sh@366 -- # decimal 2 00:04:49.308 08:50:14 thread -- scripts/common.sh@353 -- # local d=2 00:04:49.308 08:50:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.308 08:50:14 thread -- scripts/common.sh@355 -- # echo 2 00:04:49.308 08:50:14 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.308 08:50:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.308 08:50:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.308 08:50:14 thread -- scripts/common.sh@368 -- # return 0 00:04:49.308 08:50:14 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.308 08:50:14 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.308 --rc genhtml_branch_coverage=1 00:04:49.308 --rc genhtml_function_coverage=1 00:04:49.308 --rc genhtml_legend=1 00:04:49.308 --rc geninfo_all_blocks=1 00:04:49.308 --rc geninfo_unexecuted_blocks=1 00:04:49.308 00:04:49.308 ' 00:04:49.308 08:50:14 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.308 --rc genhtml_branch_coverage=1 00:04:49.308 --rc genhtml_function_coverage=1 00:04:49.308 --rc genhtml_legend=1 00:04:49.308 --rc geninfo_all_blocks=1 00:04:49.308 --rc geninfo_unexecuted_blocks=1 00:04:49.308 00:04:49.308 ' 00:04:49.308 08:50:14 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.308 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.308 --rc genhtml_branch_coverage=1 00:04:49.308 --rc genhtml_function_coverage=1 00:04:49.308 --rc genhtml_legend=1 00:04:49.308 --rc geninfo_all_blocks=1 00:04:49.308 --rc geninfo_unexecuted_blocks=1 00:04:49.308 00:04:49.308 ' 00:04:49.308 08:50:14 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.308 --rc genhtml_branch_coverage=1 00:04:49.308 --rc genhtml_function_coverage=1 00:04:49.308 --rc genhtml_legend=1 00:04:49.308 --rc geninfo_all_blocks=1 00:04:49.308 --rc geninfo_unexecuted_blocks=1 00:04:49.308 00:04:49.308 ' 00:04:49.308 08:50:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.308 08:50:14 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:49.308 08:50:14 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.308 08:50:14 thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.308 ************************************ 00:04:49.308 START TEST thread_poller_perf 00:04:49.308 ************************************ 00:04:49.308 08:50:14 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.308 [2024-11-20 08:50:14.696025] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:04:49.308 [2024-11-20 08:50:14.696139] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466261 ] 00:04:49.308 [2024-11-20 08:50:14.781454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.308 [2024-11-20 08:50:14.812593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.308 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:50.691 [2024-11-20T07:50:16.220Z] ====================================== 00:04:50.691 [2024-11-20T07:50:16.220Z] busy:2408840572 (cyc) 00:04:50.691 [2024-11-20T07:50:16.220Z] total_run_count: 418000 00:04:50.691 [2024-11-20T07:50:16.220Z] tsc_hz: 2400000000 (cyc) 00:04:50.691 [2024-11-20T07:50:16.220Z] ====================================== 00:04:50.691 [2024-11-20T07:50:16.220Z] poller_cost: 5762 (cyc), 2400 (nsec) 00:04:50.691 00:04:50.691 real 0m1.171s 00:04:50.691 user 0m1.098s 00:04:50.691 sys 0m0.070s 00:04:50.691 08:50:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.691 08:50:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:50.691 ************************************ 00:04:50.691 END TEST thread_poller_perf 00:04:50.691 ************************************ 00:04:50.691 08:50:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.691 08:50:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:50.691 08:50:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.691 08:50:15 thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.691 ************************************ 00:04:50.691 START TEST thread_poller_perf 00:04:50.691 
************************************ 00:04:50.692 08:50:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.692 [2024-11-20 08:50:15.944051] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:04:50.692 [2024-11-20 08:50:15.944154] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466609 ] 00:04:50.692 [2024-11-20 08:50:16.032227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.692 [2024-11-20 08:50:16.064403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.692 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:51.632 [2024-11-20T07:50:17.161Z] ====================================== 00:04:51.632 [2024-11-20T07:50:17.161Z] busy:2401529810 (cyc) 00:04:51.632 [2024-11-20T07:50:17.161Z] total_run_count: 5565000 00:04:51.632 [2024-11-20T07:50:17.161Z] tsc_hz: 2400000000 (cyc) 00:04:51.632 [2024-11-20T07:50:17.161Z] ====================================== 00:04:51.632 [2024-11-20T07:50:17.161Z] poller_cost: 431 (cyc), 179 (nsec) 00:04:51.632 00:04:51.632 real 0m1.168s 00:04:51.632 user 0m1.087s 00:04:51.632 sys 0m0.077s 00:04:51.632 08:50:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.632 08:50:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.632 ************************************ 00:04:51.632 END TEST thread_poller_perf 00:04:51.632 ************************************ 00:04:51.632 08:50:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:51.632 00:04:51.632 real 0m2.695s 00:04:51.632 user 0m2.370s 00:04:51.632 sys 0m0.336s 00:04:51.632 08:50:17 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.632 08:50:17 thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.632 ************************************ 00:04:51.632 END TEST thread 00:04:51.632 ************************************ 00:04:51.893 08:50:17 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:51.893 08:50:17 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:51.893 08:50:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.893 08:50:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.893 08:50:17 -- common/autotest_common.sh@10 -- # set +x 00:04:51.893 ************************************ 00:04:51.893 START TEST app_cmdline 00:04:51.893 ************************************ 00:04:51.893 08:50:17 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:51.893 * Looking for test storage... 00:04:51.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:51.893 08:50:17 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.893 08:50:17 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:51.893 08:50:17 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.893 08:50:17 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.893 08:50:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.893 08:50:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.893 08:50:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.893 08:50:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.893 08:50:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.894 08:50:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:51.894 08:50:17 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.894 08:50:17 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.894 --rc genhtml_branch_coverage=1 
00:04:51.894 --rc genhtml_function_coverage=1 00:04:51.894 --rc genhtml_legend=1 00:04:51.894 --rc geninfo_all_blocks=1 00:04:51.894 --rc geninfo_unexecuted_blocks=1 00:04:51.894 00:04:51.894 ' 00:04:51.894 08:50:17 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.894 --rc genhtml_branch_coverage=1 00:04:51.894 --rc genhtml_function_coverage=1 00:04:51.894 --rc genhtml_legend=1 00:04:51.894 --rc geninfo_all_blocks=1 00:04:51.894 --rc geninfo_unexecuted_blocks=1 00:04:51.894 00:04:51.894 ' 00:04:51.894 08:50:17 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.894 --rc genhtml_branch_coverage=1 00:04:51.894 --rc genhtml_function_coverage=1 00:04:51.894 --rc genhtml_legend=1 00:04:51.894 --rc geninfo_all_blocks=1 00:04:51.894 --rc geninfo_unexecuted_blocks=1 00:04:51.894 00:04:51.894 ' 00:04:51.894 08:50:17 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.894 --rc genhtml_branch_coverage=1 00:04:51.894 --rc genhtml_function_coverage=1 00:04:51.894 --rc genhtml_legend=1 00:04:51.894 --rc geninfo_all_blocks=1 00:04:51.894 --rc geninfo_unexecuted_blocks=1 00:04:51.894 00:04:51.894 ' 00:04:51.894 08:50:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:51.894 08:50:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=466950 00:04:51.894 08:50:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 466950 00:04:51.894 08:50:17 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:51.894 08:50:17 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 466950 ']' 00:04:51.894 08:50:17 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:51.894 08:50:17 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.894 08:50:17 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.894 08:50:17 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.894 08:50:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:52.154 [2024-11-20 08:50:17.477408] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:04:52.154 [2024-11-20 08:50:17.477477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466950 ] 00:04:52.154 [2024-11-20 08:50:17.561773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.154 [2024-11-20 08:50:17.595342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:53.097 08:50:18 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:53.097 { 00:04:53.097 "version": "SPDK v25.01-pre git sha1 17ebaf46f", 00:04:53.097 "fields": { 00:04:53.097 "major": 25, 00:04:53.097 "minor": 1, 00:04:53.097 "patch": 0, 00:04:53.097 "suffix": "-pre", 00:04:53.097 "commit": "17ebaf46f" 00:04:53.097 } 00:04:53.097 } 00:04:53.097 08:50:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:53.097 08:50:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:53.097 08:50:18 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:04:53.097 08:50:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:53.097 08:50:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:53.097 08:50:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.097 08:50:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.097 08:50:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:53.097 08:50:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:53.097 08:50:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:53.097 08:50:18 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:53.358 request: 00:04:53.358 { 00:04:53.358 "method": "env_dpdk_get_mem_stats", 00:04:53.358 "req_id": 1 00:04:53.358 } 00:04:53.358 Got JSON-RPC error response 00:04:53.358 response: 00:04:53.358 { 00:04:53.358 "code": -32601, 00:04:53.358 "message": "Method not found" 00:04:53.359 } 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.359 08:50:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 466950 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 466950 ']' 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 466950 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466950 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466950' 00:04:53.359 killing process with pid 466950 00:04:53.359 08:50:18 
app_cmdline -- common/autotest_common.sh@973 -- # kill 466950 00:04:53.359 08:50:18 app_cmdline -- common/autotest_common.sh@978 -- # wait 466950 00:04:53.619 00:04:53.619 real 0m1.695s 00:04:53.619 user 0m2.037s 00:04:53.619 sys 0m0.444s 00:04:53.619 08:50:18 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.619 08:50:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:53.619 ************************************ 00:04:53.619 END TEST app_cmdline 00:04:53.619 ************************************ 00:04:53.619 08:50:18 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.619 08:50:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.619 08:50:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.619 08:50:18 -- common/autotest_common.sh@10 -- # set +x 00:04:53.619 ************************************ 00:04:53.620 START TEST version 00:04:53.620 ************************************ 00:04:53.620 08:50:18 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.620 * Looking for test storage... 
00:04:53.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:53.620 08:50:19 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.620 08:50:19 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.620 08:50:19 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.894 08:50:19 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.894 08:50:19 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.894 08:50:19 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.894 08:50:19 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.894 08:50:19 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.894 08:50:19 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.894 08:50:19 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.894 08:50:19 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.894 08:50:19 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.894 08:50:19 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.894 08:50:19 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.894 08:50:19 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.894 08:50:19 version -- scripts/common.sh@344 -- # case "$op" in 00:04:53.894 08:50:19 version -- scripts/common.sh@345 -- # : 1 00:04:53.895 08:50:19 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.895 08:50:19 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.895 08:50:19 version -- scripts/common.sh@365 -- # decimal 1 00:04:53.895 08:50:19 version -- scripts/common.sh@353 -- # local d=1 00:04:53.895 08:50:19 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.895 08:50:19 version -- scripts/common.sh@355 -- # echo 1 00:04:53.895 08:50:19 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.895 08:50:19 version -- scripts/common.sh@366 -- # decimal 2 00:04:53.895 08:50:19 version -- scripts/common.sh@353 -- # local d=2 00:04:53.895 08:50:19 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.895 08:50:19 version -- scripts/common.sh@355 -- # echo 2 00:04:53.895 08:50:19 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.895 08:50:19 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.895 08:50:19 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.895 08:50:19 version -- scripts/common.sh@368 -- # return 0 00:04:53.895 08:50:19 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.895 08:50:19 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.895 --rc genhtml_branch_coverage=1 00:04:53.895 --rc genhtml_function_coverage=1 00:04:53.895 --rc genhtml_legend=1 00:04:53.895 --rc geninfo_all_blocks=1 00:04:53.895 --rc geninfo_unexecuted_blocks=1 00:04:53.895 00:04:53.895 ' 00:04:53.895 08:50:19 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.895 --rc genhtml_branch_coverage=1 00:04:53.895 --rc genhtml_function_coverage=1 00:04:53.895 --rc genhtml_legend=1 00:04:53.895 --rc geninfo_all_blocks=1 00:04:53.895 --rc geninfo_unexecuted_blocks=1 00:04:53.895 00:04:53.895 ' 00:04:53.895 08:50:19 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.895 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.895 --rc genhtml_branch_coverage=1 00:04:53.895 --rc genhtml_function_coverage=1 00:04:53.895 --rc genhtml_legend=1 00:04:53.895 --rc geninfo_all_blocks=1 00:04:53.895 --rc geninfo_unexecuted_blocks=1 00:04:53.895 00:04:53.895 ' 00:04:53.895 08:50:19 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.895 --rc genhtml_branch_coverage=1 00:04:53.895 --rc genhtml_function_coverage=1 00:04:53.895 --rc genhtml_legend=1 00:04:53.895 --rc geninfo_all_blocks=1 00:04:53.895 --rc geninfo_unexecuted_blocks=1 00:04:53.895 00:04:53.895 ' 00:04:53.895 08:50:19 version -- app/version.sh@17 -- # get_header_version major 00:04:53.895 08:50:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.895 08:50:19 version -- app/version.sh@14 -- # cut -f2 00:04:53.895 08:50:19 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.895 08:50:19 version -- app/version.sh@17 -- # major=25 00:04:53.895 08:50:19 version -- app/version.sh@18 -- # get_header_version minor 00:04:53.895 08:50:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.895 08:50:19 version -- app/version.sh@14 -- # cut -f2 00:04:53.895 08:50:19 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.895 08:50:19 version -- app/version.sh@18 -- # minor=1 00:04:53.895 08:50:19 version -- app/version.sh@19 -- # get_header_version patch 00:04:53.895 08:50:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.895 08:50:19 version -- app/version.sh@14 -- # cut -f2 00:04:53.895 08:50:19 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.895 
08:50:19 version -- app/version.sh@19 -- # patch=0 00:04:53.895 08:50:19 version -- app/version.sh@20 -- # get_header_version suffix 00:04:53.895 08:50:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.895 08:50:19 version -- app/version.sh@14 -- # cut -f2 00:04:53.895 08:50:19 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.895 08:50:19 version -- app/version.sh@20 -- # suffix=-pre 00:04:53.895 08:50:19 version -- app/version.sh@22 -- # version=25.1 00:04:53.895 08:50:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:53.895 08:50:19 version -- app/version.sh@28 -- # version=25.1rc0 00:04:53.895 08:50:19 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:53.895 08:50:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:53.895 08:50:19 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:53.895 08:50:19 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:53.895 00:04:53.895 real 0m0.286s 00:04:53.895 user 0m0.171s 00:04:53.895 sys 0m0.164s 00:04:53.895 08:50:19 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.895 08:50:19 version -- common/autotest_common.sh@10 -- # set +x 00:04:53.895 ************************************ 00:04:53.895 END TEST version 00:04:53.895 ************************************ 00:04:53.895 08:50:19 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:53.895 08:50:19 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:53.895 08:50:19 -- spdk/autotest.sh@194 -- # uname -s 00:04:53.895 08:50:19 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:53.895 08:50:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:53.896 08:50:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:53.896 08:50:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:53.896 08:50:19 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:53.896 08:50:19 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:53.896 08:50:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.896 08:50:19 -- common/autotest_common.sh@10 -- # set +x 00:04:53.896 08:50:19 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:53.896 08:50:19 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:53.896 08:50:19 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:53.896 08:50:19 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:53.896 08:50:19 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:53.896 08:50:19 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:53.896 08:50:19 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:53.896 08:50:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:53.896 08:50:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.896 08:50:19 -- common/autotest_common.sh@10 -- # set +x 00:04:53.896 ************************************ 00:04:53.896 START TEST nvmf_tcp 00:04:53.896 ************************************ 00:04:53.896 08:50:19 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:54.161 * Looking for test storage... 
00:04:54.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.161 08:50:19 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.161 --rc genhtml_branch_coverage=1 00:04:54.161 --rc genhtml_function_coverage=1 00:04:54.161 --rc genhtml_legend=1 00:04:54.161 --rc geninfo_all_blocks=1 00:04:54.161 --rc geninfo_unexecuted_blocks=1 00:04:54.161 00:04:54.161 ' 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.161 --rc genhtml_branch_coverage=1 00:04:54.161 --rc genhtml_function_coverage=1 00:04:54.161 --rc genhtml_legend=1 00:04:54.161 --rc geninfo_all_blocks=1 00:04:54.161 --rc geninfo_unexecuted_blocks=1 00:04:54.161 00:04:54.161 ' 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:54.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.161 --rc genhtml_branch_coverage=1 00:04:54.161 --rc genhtml_function_coverage=1 00:04:54.161 --rc genhtml_legend=1 00:04:54.161 --rc geninfo_all_blocks=1 00:04:54.161 --rc geninfo_unexecuted_blocks=1 00:04:54.161 00:04:54.161 ' 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.161 --rc genhtml_branch_coverage=1 00:04:54.161 --rc genhtml_function_coverage=1 00:04:54.161 --rc genhtml_legend=1 00:04:54.161 --rc geninfo_all_blocks=1 00:04:54.161 --rc geninfo_unexecuted_blocks=1 00:04:54.161 00:04:54.161 ' 00:04:54.161 08:50:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:54.161 08:50:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:54.161 08:50:19 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.161 08:50:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.161 ************************************ 00:04:54.161 START TEST nvmf_target_core 00:04:54.161 ************************************ 00:04:54.161 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:54.424 * Looking for test storage... 
00:04:54.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.425 --rc genhtml_branch_coverage=1 00:04:54.425 --rc genhtml_function_coverage=1 00:04:54.425 --rc genhtml_legend=1 00:04:54.425 --rc geninfo_all_blocks=1 00:04:54.425 --rc geninfo_unexecuted_blocks=1 00:04:54.425 00:04:54.425 ' 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.425 --rc genhtml_branch_coverage=1 
00:04:54.425 --rc genhtml_function_coverage=1 00:04:54.425 --rc genhtml_legend=1 00:04:54.425 --rc geninfo_all_blocks=1 00:04:54.425 --rc geninfo_unexecuted_blocks=1 00:04:54.425 00:04:54.425 ' 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.425 --rc genhtml_branch_coverage=1 00:04:54.425 --rc genhtml_function_coverage=1 00:04:54.425 --rc genhtml_legend=1 00:04:54.425 --rc geninfo_all_blocks=1 00:04:54.425 --rc geninfo_unexecuted_blocks=1 00:04:54.425 00:04:54.425 ' 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.425 --rc genhtml_branch_coverage=1 00:04:54.425 --rc genhtml_function_coverage=1 00:04:54.425 --rc genhtml_legend=1 00:04:54.425 --rc geninfo_all_blocks=1 00:04:54.425 --rc geninfo_unexecuted_blocks=1 00:04:54.425 00:04:54.425 ' 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.425 08:50:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:54.426 ************************************ 00:04:54.426 START TEST nvmf_abort 00:04:54.426 ************************************ 00:04:54.426 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:54.688 * Looking for test storage... 
00:04:54.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.688 
08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.688 --rc genhtml_branch_coverage=1 00:04:54.688 --rc genhtml_function_coverage=1 00:04:54.688 --rc genhtml_legend=1 00:04:54.688 --rc geninfo_all_blocks=1 00:04:54.688 --rc 
geninfo_unexecuted_blocks=1 00:04:54.688 00:04:54.688 ' 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.688 --rc genhtml_branch_coverage=1 00:04:54.688 --rc genhtml_function_coverage=1 00:04:54.688 --rc genhtml_legend=1 00:04:54.688 --rc geninfo_all_blocks=1 00:04:54.688 --rc geninfo_unexecuted_blocks=1 00:04:54.688 00:04:54.688 ' 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.688 --rc genhtml_branch_coverage=1 00:04:54.688 --rc genhtml_function_coverage=1 00:04:54.688 --rc genhtml_legend=1 00:04:54.688 --rc geninfo_all_blocks=1 00:04:54.688 --rc geninfo_unexecuted_blocks=1 00:04:54.688 00:04:54.688 ' 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.688 --rc genhtml_branch_coverage=1 00:04:54.688 --rc genhtml_function_coverage=1 00:04:54.688 --rc genhtml_legend=1 00:04:54.688 --rc geninfo_all_blocks=1 00:04:54.688 --rc geninfo_unexecuted_blocks=1 00:04:54.688 00:04:54.688 ' 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.688 08:50:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.688 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
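The paths/export.sh trace above prepends the same toolchain directories (`/opt/golangci`, `/opt/protoc`, `/opt/go`) each time the file is sourced, so the exported PATH accumulates many duplicate entries. A hypothetical dedup helper (not part of SPDK) that collapses such a PATH while keeping the first occurrence of each entry:

```shell
# Illustrative only: prints the colon-separated path with repeats removed,
# preserving the order of first appearance.
dedup_path() {
  local out= seen=: entry
  local IFS=:                            # split the input on colons
  for entry in $1; do
    case "$seen" in
      *":$entry:"*) continue ;;          # already kept once, skip the repeat
    esac
    seen="$seen$entry:"
    out="${out:+$out:}$entry"
  done
  printf '%s\n' "$out"
}
```

Usage: `PATH=$(dedup_path "$PATH")`. The duplicates are harmless for lookup (the first hit wins anyway) but make traces like the one above hard to read.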
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
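The trace above records a real (non-fatal) shell error: nvmf/common.sh line 33 executes `'[' '' -eq 1 ']'` and bash reports "integer expression expected", because `-eq` requires numeric operands and the variable expanded empty. A defensive pattern (illustrative, not the SPDK fix) defaults the empty expansion to 0 before the numeric test:

```shell
# With an empty variable, '[ "$flag" -eq 1 ]' errors out exactly as in the
# log above; '${flag:-0}' substitutes 0 when flag is unset or empty, so the
# numeric comparison always sees an integer.
flag=""                                # e.g. an optional feature flag left unset
if [ "${flag:-0}" -eq 1 ]; then
  echo "flag enabled"
else
  echo "flag disabled"
fi
```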
gather_supported_nvmf_pci_devs 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:54.689 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:02.837 08:50:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:02.837 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:02.837 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:02.837 08:50:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:02.837 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:02.837 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:4b:00.1: cvl_0_1' 00:05:02.838 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:02.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:02.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:05:02.838 00:05:02.838 --- 10.0.0.2 ping statistics --- 00:05:02.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:02.838 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:02.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:02.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:05:02.838 00:05:02.838 --- 10.0.0.1 ping statistics --- 00:05:02.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:02.838 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort 
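The nvmf_tcp_init trace above (nvmf/common.sh@267-291) builds the test topology on one dual-port NIC: the target port `cvl_0_0` is moved into a network namespace and given 10.0.0.2, the initiator port `cvl_0_1` stays in the root namespace with 10.0.0.1, an iptables rule admits the NVMe/TCP port 4420, and both directions are ping-verified. A condensed sketch of that sequence, using the interface names from this log; the `run` wrapper is an addition so the script can be dry-run without root:

```shell
# Sketch of the namespace setup traced above. Real execution needs root;
# set DRY_RUN=1 to print the commands instead of running them.
NS=cvl_0_0_ns_spdk
run() { ${DRY_RUN:+echo} "$@"; }

setup_nvmf_tcp_ns() {
  run ip -4 addr flush cvl_0_0
  run ip -4 addr flush cvl_0_1
  run ip netns add "$NS"
  run ip link set cvl_0_0 netns "$NS"                  # target port into the netns
  run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side (root ns)
  run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec "$NS" ip link set cvl_0_0 up
  run ip netns exec "$NS" ip link set lo up
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
```

Isolating the target port in a namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2, reached via `ip netns exec`) and initiator (10.0.0.1) over a real physical link.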
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=471363 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 471363 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 471363 ']' 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.838 08:50:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.838 [2024-11-20 08:50:27.802248] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:05:02.838 [2024-11-20 08:50:27.802312] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:02.838 [2024-11-20 08:50:27.903512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.838 [2024-11-20 08:50:27.957429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:02.838 [2024-11-20 08:50:27.957485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:02.838 [2024-11-20 08:50:27.957494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.838 [2024-11-20 08:50:27.957501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.838 [2024-11-20 08:50:27.957508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:02.838 [2024-11-20 08:50:27.959552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.838 [2024-11-20 08:50:27.959718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.838 [2024-11-20 08:50:27.959719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.440 [2024-11-20 08:50:28.678930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.440 Malloc0 00:05:03.440 08:50:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.440 Delay0 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.440 [2024-11-20 08:50:28.766108] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.440 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:03.440 [2024-11-20 08:50:28.915927] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:05.476 Initializing NVMe Controllers 00:05:05.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:05.476 controller IO queue size 128 less than required 00:05:05.476 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:05.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:05.476 Initialization complete. Launching workers. 
00:05:05.476 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28456 00:05:05.476 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28517, failed to submit 62 00:05:05.476 success 28460, unsuccessful 57, failed 0 00:05:05.476 08:50:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:05.476 08:50:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.476 08:50:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:05.736 rmmod nvme_tcp 00:05:05.736 rmmod nvme_fabrics 00:05:05.736 rmmod nvme_keyring 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:05.736 08:50:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 471363 ']' 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 471363 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 471363 ']' 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 471363 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 471363 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 471363' 00:05:05.736 killing process with pid 471363 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 471363 00:05:05.736 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 471363 00:05:05.995 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:05.995 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:05.995 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:05.995 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:05.995 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:05.995 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:05:05.995 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:05.995 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:05.995 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:05.995 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:05.996 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:05.996 08:50:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:07.910 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:07.910 00:05:07.910 real 0m13.440s 00:05:07.910 user 0m13.902s 00:05:07.910 sys 0m6.682s 00:05:07.910 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.910 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.910 ************************************ 00:05:07.910 END TEST nvmf_abort 00:05:07.910 ************************************ 00:05:07.910 08:50:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:07.910 08:50:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:07.910 08:50:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.910 08:50:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:08.171 ************************************ 00:05:08.171 START TEST nvmf_ns_hotplug_stress 00:05:08.171 ************************************ 00:05:08.171 08:50:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:08.171 * Looking for test storage... 00:05:08.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.171 
08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.171 08:50:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.171 --rc genhtml_branch_coverage=1 00:05:08.171 --rc genhtml_function_coverage=1 00:05:08.171 --rc genhtml_legend=1 00:05:08.171 --rc geninfo_all_blocks=1 00:05:08.171 --rc geninfo_unexecuted_blocks=1 00:05:08.171 00:05:08.171 ' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.171 --rc genhtml_branch_coverage=1 00:05:08.171 --rc genhtml_function_coverage=1 00:05:08.171 --rc genhtml_legend=1 00:05:08.171 --rc geninfo_all_blocks=1 00:05:08.171 --rc geninfo_unexecuted_blocks=1 00:05:08.171 00:05:08.171 ' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.171 --rc genhtml_branch_coverage=1 00:05:08.171 --rc genhtml_function_coverage=1 00:05:08.171 --rc genhtml_legend=1 00:05:08.171 --rc geninfo_all_blocks=1 00:05:08.171 --rc geninfo_unexecuted_blocks=1 00:05:08.171 00:05:08.171 ' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.171 --rc genhtml_branch_coverage=1 00:05:08.171 --rc genhtml_function_coverage=1 00:05:08.171 --rc genhtml_legend=1 00:05:08.171 --rc geninfo_all_blocks=1 00:05:08.171 --rc geninfo_unexecuted_blocks=1 00:05:08.171 
00:05:08.171 ' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:08.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:08.171 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.311 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:16.312 08:50:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:16.312 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:16.312 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:16.312 08:50:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:16.312 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.312 08:50:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:16.312 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:16.312 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:16.312 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:16.312 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:16.312 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:16.312 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:16.312 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:16.312 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:16.312 08:50:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:16.312 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:16.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:16.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:05:16.312 00:05:16.313 --- 10.0.0.2 ping statistics --- 00:05:16.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.313 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:16.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:16.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:05:16.313 00:05:16.313 --- 10.0.0.1 ping statistics --- 00:05:16.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.313 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=476239 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 476239 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 476239 ']' 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.313 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.313 [2024-11-20 08:50:41.286452] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:05:16.313 [2024-11-20 08:50:41.286522] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:16.313 [2024-11-20 08:50:41.388334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.313 [2024-11-20 08:50:41.439425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:16.313 [2024-11-20 08:50:41.439476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:16.313 [2024-11-20 08:50:41.439485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.313 [2024-11-20 08:50:41.439493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.313 [2024-11-20 08:50:41.439499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:16.313 [2024-11-20 08:50:41.441339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.313 [2024-11-20 08:50:41.441501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.313 [2024-11-20 08:50:41.441502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.884 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.884 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:16.884 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:16.884 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.884 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.884 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:16.884 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:16.884 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:16.884 [2024-11-20 08:50:42.317229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.884 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:17.144 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:17.406 [2024-11-20 08:50:42.716313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:17.406 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:17.668 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:17.668 Malloc0 00:05:17.668 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:17.929 Delay0 00:05:17.929 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.191 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:18.452 NULL1 00:05:18.452 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:18.452 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=476927 00:05:18.452 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:18.452 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:18.452 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.712 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.972 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:18.972 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:18.972 true 00:05:18.972 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:18.972 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.233 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.494 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:19.494 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:19.494 true 00:05:19.755 08:50:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:19.755 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.755 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.016 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:20.016 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:20.276 true 00:05:20.276 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:20.276 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.276 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.538 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:20.538 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:20.798 true 00:05:20.798 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:20.798 08:50:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.798 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.058 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:21.058 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:21.320 true 00:05:21.320 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:21.320 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.581 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.581 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:21.581 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:21.840 true 00:05:21.840 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:21.840 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.101 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.101 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:22.101 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:22.360 true 00:05:22.360 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:22.360 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.621 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.621 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:22.621 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:22.883 true 00:05:22.883 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:22.883 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.143 
08:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.404 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:23.404 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:23.404 true 00:05:23.404 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:23.404 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.664 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.925 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:23.925 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:23.925 true 00:05:23.925 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:23.925 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.185 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.446 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:24.446 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:24.446 true 00:05:24.446 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:24.446 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.708 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.971 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:24.971 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:24.971 true 00:05:25.231 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:25.231 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.231 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.492 
08:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:25.492 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:25.753 true 00:05:25.753 08:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:25.753 08:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.753 08:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.014 08:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:26.014 08:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:26.275 true 00:05:26.275 08:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:26.275 08:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.535 08:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.535 08:50:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:26.535 08:50:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:26.795 true 00:05:26.795 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:26.795 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.054 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.054 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:27.055 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:27.315 true 00:05:27.315 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:27.315 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.576 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.576 08:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:27.576 08:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:27.837 true 00:05:27.837 08:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:27.837 08:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.098 08:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.358 08:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:28.358 08:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:28.358 true 00:05:28.358 08:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:28.358 08:50:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.619 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.880 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:28.880 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:28.880 true 00:05:28.880 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:28.880 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.141 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.401 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:29.401 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:29.401 true 00:05:29.401 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:29.401 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.662 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.923 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:29.923 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:29.923 true 00:05:30.183 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:30.183 08:50:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.183 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.444 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:30.444 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:30.444 true 00:05:30.705 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:30.705 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.705 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.967 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:30.967 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:31.228 true 00:05:31.228 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:31.228 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.228 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.488 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:31.488 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:31.747 true 00:05:31.748 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:31.748 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.748 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.008 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:32.008 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:32.267 true 00:05:32.267 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:32.267 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.527 
08:50:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.527 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:32.527 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:32.786 true 00:05:32.786 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:32.786 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.046 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.046 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:33.046 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:33.305 true 00:05:33.305 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:33.305 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.563 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.823 08:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:33.823 08:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:33.823 true 00:05:33.823 08:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:33.823 08:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.082 08:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.343 08:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:34.343 08:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:34.343 true 00:05:34.343 08:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:34.343 08:50:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.604 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.863 
08:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:34.863 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:34.863 true 00:05:34.863 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:34.863 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.124 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.384 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:35.384 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:35.384 true 00:05:35.644 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:35.644 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.644 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.905 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:35.905 08:51:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:36.166 true 00:05:36.166 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:36.166 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.166 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.426 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:36.426 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:36.686 true 00:05:36.686 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:36.686 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.947 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.947 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:36.947 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:37.208 true 00:05:37.208 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:37.208 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.468 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.468 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:37.468 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:37.729 true 00:05:37.729 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:37.729 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.989 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.989 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:37.989 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:38.249 true 00:05:38.249 08:51:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:38.249 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.508 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.768 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:38.768 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:38.768 true 00:05:38.768 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:38.768 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.028 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.289 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:39.289 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:39.289 true 00:05:39.289 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:39.289 08:51:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.551 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.811 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:39.811 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:39.811 true 00:05:40.071 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:40.071 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.071 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.331 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:40.331 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:40.591 true 00:05:40.591 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:40.591 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.591 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.852 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:40.852 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:41.112 true 00:05:41.112 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:41.112 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.112 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.373 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:41.373 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:41.633 true 00:05:41.633 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:41.633 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.894 
08:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.894 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:41.894 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:42.155 true 00:05:42.155 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:42.155 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.416 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.416 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:42.416 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:42.677 true 00:05:42.677 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:42.677 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.938 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.938 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:42.938 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:43.197 true 00:05:43.197 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:43.197 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.458 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.719 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:43.719 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:43.719 true 00:05:43.719 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:43.719 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.979 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.241 
08:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:44.241 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:44.241 true 00:05:44.241 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:44.241 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.502 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.762 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:44.762 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:44.762 true 00:05:44.762 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:44.762 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.024 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.285 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:05:45.285 08:51:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:05:45.285 true 00:05:45.582 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:45.582 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.582 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.908 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:05:45.908 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:05:45.908 true 00:05:45.908 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:45.908 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.225 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.225 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:05:46.225 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:05:46.485 true 00:05:46.485 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:46.485 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.746 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.007 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:05:47.007 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:05:47.007 true 00:05:47.007 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:47.007 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.268 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.529 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:05:47.529 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:05:47.529 true 00:05:47.529 08:51:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:47.529 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.789 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.050 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:05:48.050 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:05:48.050 true 00:05:48.050 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:48.050 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.311 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.571 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:05:48.571 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:05:48.571 true 00:05:48.832 08:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:48.832 08:51:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.832 Initializing NVMe Controllers 00:05:48.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:48.832 Controller IO queue size 128, less than required. 00:05:48.832 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:48.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:48.832 Initialization complete. Launching workers. 00:05:48.832 ======================================================== 00:05:48.832 Latency(us) 00:05:48.832 Device Information : IOPS MiB/s Average min max 00:05:48.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31050.20 15.16 4122.38 1112.83 7963.06 00:05:48.832 ======================================================== 00:05:48.832 Total : 31050.20 15.16 4122.38 1112.83 7963.06 00:05:48.832 00:05:48.832 08:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.095 08:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:05:49.095 08:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:05:49.355 true 00:05:49.355 08:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 476927 00:05:49.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (476927) - No such process 00:05:49.355 08:51:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 476927 00:05:49.355 08:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.355 08:51:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.616 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:49.616 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:49.616 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:49.616 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:49.616 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:49.879 null0 00:05:49.879 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:49.879 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:49.879 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:49.879 null1 00:05:49.879 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:49.879 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:05:49.879 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:50.139 null2 00:05:50.139 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.139 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.139 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:50.399 null3 00:05:50.399 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.399 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.399 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:50.399 null4 00:05:50.659 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.659 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.659 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:50.659 null5 00:05:50.659 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.659 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.659 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:50.919 null6 00:05:50.919 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.919 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.919 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:51.180 null7 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 484057 484058 484060 484062 484064 484066 484067 484069 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
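The interleaved trace above is eight background workers each toggling one namespace on `nqn.2016-06.io.spdk:cnode1`. The following is a minimal runnable sketch of that pattern, reconstructed from the `ns_hotplug_stress.sh` line tags visible in the trace (`@14`-`@18` for the per-worker loop, `@62`-`@66` for the fan-out and `wait`); the real script drives `spdk/scripts/rpc.py`, which is replaced here by a stub `rpc` function (an assumption, so the sketch runs standalone):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stub standing in for spdk/scripts/rpc.py (assumption; the real test
# issues JSON-RPC calls against a running nvmf target).
rpc() { :; }

nthreads=8
subsystem=nqn.2016-06.io.spdk:cnode1

# Each worker repeatedly attaches and detaches one namespace,
# mirroring the add_remove function at ns_hotplug_stress.sh@14-@18.
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" "$subsystem" "$bdev"
        rpc nvmf_subsystem_remove_ns "$subsystem" "$nsid"
    done
}

# Fan out one worker per null bdev and collect their pids (@62-@64),
# then wait for all of them (@66) -- the source of the interleaving above.
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &
    pids+=($!)
done
wait "${pids[@]}"
echo "all ${#pids[@]} workers finished"
```

Because every worker traces to the same console, the per-record timestamps are the only reliable ordering in the log; lines from different `nsid` workers freely interleave, which is expected and not a failure.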
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.180 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.441 08:51:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.441 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.702 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.963 08:51:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.963 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.224 08:51:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.224 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.487 08:51:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.487 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.487 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.487 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.748 08:51:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.748 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.011 08:51:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.011 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.272 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.272 08:51:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.533 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.533 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:53.533 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.533 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.533 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.533 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:53.533 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.533 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:53.533 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.533 08:51:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:53.795 08:51:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.795 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:54.056 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:54.317 08:51:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.317 08:51:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.317 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.582 08:51:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:54.582 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.582 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.582 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.582 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.582 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.582 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.842 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.843 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.843 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:54.843 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:54.843 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:54.843 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:54.843 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:54.843 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:54.843 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:54.843 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:54.843 rmmod nvme_tcp 00:05:54.843 rmmod nvme_fabrics 00:05:54.843 rmmod 
nvme_keyring 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 476239 ']' 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 476239 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 476239 ']' 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 476239 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 476239 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 476239' 00:05:55.103 killing process with pid 476239 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 476239 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 476239 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.103 08:51:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:57.648 00:05:57.648 real 0m49.200s 00:05:57.648 user 3m19.923s 00:05:57.648 sys 0m17.589s 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:57.648 ************************************ 00:05:57.648 END TEST nvmf_ns_hotplug_stress 00:05:57.648 ************************************ 
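The trace above is driven by a small loop in `target/ns_hotplug_stress.sh`: a counter guarded by `(( i < 10 ))`, with `nvmf_subsystem_add_ns` attaching null bdevs `null0`..`null7` as namespaces 1..8 of `nqn.2016-06.io.spdk:cnode1` and `nvmf_subsystem_remove_ns` detaching them. The following is a hedged, serialized sketch of that control flow; the real script drives a live nvmf target through `scripts/rpc.py` and interleaves the calls (as the out-of-order log lines show), whereas here the RPC client is stubbed with a hypothetical `rpc` shell function so the loop structure itself is runnable.

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress.sh add/remove loop (simplified, serialized).
# "rpc" is a stand-in for /path/to/spdk/scripts/rpc.py -- it only echoes the
# command that the real script would send to the running nvmf target.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 10 )); do
    # Attach namespaces 1..8, each backed by a null bdev null0..null7
    # (the real script fires these asynchronously, hence the shuffled log).
    for n in {1..8}; do
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    # Detach them again, exercising the namespace hotplug path.
    for n in {1..8}; do
        rpc nvmf_subsystem_remove_ns "$NQN" "$n"
    done
    (( ++i ))
done
echo "completed $i iterations"
```

The final iterations of the log, where only `(( ++i ))` / `(( i < 10 ))` appear with no RPC calls between them, correspond to the loop draining before the `trap - SIGINT SIGTERM EXIT` and `nvmftestfini` teardown that follows.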
00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:57.648 ************************************ 00:05:57.648 START TEST nvmf_delete_subsystem 00:05:57.648 ************************************ 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:57.648 * Looking for test storage... 00:05:57.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.648 
08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.648 --rc genhtml_branch_coverage=1 00:05:57.648 --rc genhtml_function_coverage=1 00:05:57.648 --rc genhtml_legend=1 
00:05:57.648 --rc geninfo_all_blocks=1 00:05:57.648 --rc geninfo_unexecuted_blocks=1 00:05:57.648 00:05:57.648 ' 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.648 --rc genhtml_branch_coverage=1 00:05:57.648 --rc genhtml_function_coverage=1 00:05:57.648 --rc genhtml_legend=1 00:05:57.648 --rc geninfo_all_blocks=1 00:05:57.648 --rc geninfo_unexecuted_blocks=1 00:05:57.648 00:05:57.648 ' 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.648 --rc genhtml_branch_coverage=1 00:05:57.648 --rc genhtml_function_coverage=1 00:05:57.648 --rc genhtml_legend=1 00:05:57.648 --rc geninfo_all_blocks=1 00:05:57.648 --rc geninfo_unexecuted_blocks=1 00:05:57.648 00:05:57.648 ' 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.648 --rc genhtml_branch_coverage=1 00:05:57.648 --rc genhtml_function_coverage=1 00:05:57.648 --rc genhtml_legend=1 00:05:57.648 --rc geninfo_all_blocks=1 00:05:57.648 --rc geninfo_unexecuted_blocks=1 00:05:57.648 00:05:57.648 ' 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.648 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:57.649 08:51:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:57.649 08:51:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:05.789 08:51:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:05.789 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:05.789 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:05.789 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:05.789 08:51:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:05.789 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:05.790 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:05.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:05.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:06:05.790 00:06:05.790 --- 10.0.0.2 ping statistics --- 00:06:05.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.790 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:05.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:05.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:06:05.790 00:06:05.790 --- 10.0.0.1 ping statistics --- 00:06:05.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.790 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=489247 00:06:05.790 08:51:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 489247 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 489247 ']' 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.790 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.790 [2024-11-20 08:51:30.479155] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:06:05.790 [2024-11-20 08:51:30.479226] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:05.790 [2024-11-20 08:51:30.578261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.790 [2024-11-20 08:51:30.631453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:05.790 [2024-11-20 08:51:30.631504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:05.790 [2024-11-20 08:51:30.631513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.790 [2024-11-20 08:51:30.631521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.790 [2024-11-20 08:51:30.631527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:05.790 [2024-11-20 08:51:30.636189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.790 [2024-11-20 08:51:30.636376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.790 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.790 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:05.790 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:05.790 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.790 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.052 [2024-11-20 08:51:31.344123] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.052 [2024-11-20 08:51:31.368438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.052 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.053 NULL1 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.053 08:51:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.053 Delay0 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=489586 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:06.053 08:51:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:06.053 [2024-11-20 08:51:31.495386] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
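The setup driven above (create transport → create subsystem → add listener → null bdev → delay bdev → attach namespace) can be sketched as a dry run. This is a hypothetical sketch, not the test script itself: `rpc_cmd` is stubbed to echo, whereas in the real run it wraps SPDK's `scripts/rpc.py` talking to `/var/tmp/spdk.sock`; the NQN, serial, address, and bdev parameters are copied from the log lines above.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the subsystem setup from delete_subsystem.sh.
# rpc_cmd is a stub here; against a live nvmf_tgt it would invoke
# scripts/rpc.py with the same arguments.
rpc_cmd() { echo "rpc: $*"; }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512
# Delay0 wraps NULL1 with ~1s latency on every op, so I/O is still in
# flight when the subsystem is deleted mid-test.
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev is the point of the test: it guarantees queued I/O exists when `nvmf_delete_subsystem` runs, which is why the perf process later reports aborted commands (sct=0, sc=8).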
00:06:07.966 08:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:07.966 08:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.966 08:51:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 starting I/O failed: -6 00:06:08.227 Write completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 starting I/O failed: -6 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Write completed with error (sct=0, sc=8) 00:06:08.227 starting I/O failed: -6 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Write completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 starting I/O failed: -6 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 starting I/O failed: -6 00:06:08.227 Write completed with error (sct=0, sc=8) 00:06:08.227 Write completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Write completed with error (sct=0, sc=8) 00:06:08.227 starting I/O failed: -6 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Write completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error 
(sct=0, sc=8) 00:06:08.227 starting I/O failed: -6 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Read completed with error (sct=0, sc=8) 00:06:08.227 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 [2024-11-20 08:51:33.740607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28960 is same with the state(6) to be set 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed 
with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 
00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 
00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 starting I/O failed: -6 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 [2024-11-20 08:51:33.744393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7968000c40 is same with the state(6) to be set 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error 
(sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Write completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:08.228 Read completed with error (sct=0, sc=8) 00:06:09.612 [2024-11-20 08:51:34.717355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e299a0 is same with the state(6) to be set 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 
00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 [2024-11-20 08:51:34.744145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b40 is same with the state(6) to be set 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read 
completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 [2024-11-20 08:51:34.744489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28780 is same with the state(6) to be set 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error 
(sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 [2024-11-20 08:51:34.746433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f796800d020 is same with the state(6) to be set 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Write completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 
00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 Read completed with error (sct=0, sc=8) 00:06:09.612 [2024-11-20 08:51:34.746627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f796800d7c0 is same with the state(6) to be set 00:06:09.612 Initializing NVMe Controllers 00:06:09.612 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:09.612 Controller IO queue size 128, less than required. 00:06:09.612 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:09.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:09.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:09.612 Initialization complete. Launching workers. 00:06:09.612 ======================================================== 00:06:09.612 Latency(us) 00:06:09.612 Device Information : IOPS MiB/s Average min max 00:06:09.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.87 0.08 903946.15 322.26 1006686.98 00:06:09.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.40 0.08 1022935.94 290.90 2001719.30 00:06:09.612 ======================================================== 00:06:09.612 Total : 324.27 0.16 962070.19 290.90 2001719.30 00:06:09.612 00:06:09.612 [2024-11-20 08:51:34.747126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e299a0 (9): Bad file descriptor 00:06:09.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:09.612 08:51:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.612 08:51:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:09.612 08:51:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 489586 00:06:09.612 08:51:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 489586 00:06:09.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (489586) - No such process 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 489586 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 489586 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 489586 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.873 
08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.873 [2024-11-20 08:51:35.278003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # 
perf_pid=490274 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 490274 00:06:09.873 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.873 [2024-11-20 08:51:35.377522] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:10.443 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:10.443 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 490274 00:06:10.443 08:51:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:11.013 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:11.013 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 490274 00:06:11.013 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:11.582 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:11.582 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 490274 00:06:11.582 08:51:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:11.843 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:11.843 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 490274 00:06:11.843 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:12.412 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:12.412 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 490274 00:06:12.412 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:12.984 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:12.984 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 490274 00:06:12.984 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:12.984 Initializing NVMe Controllers 00:06:12.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:12.984 Controller IO queue size 128, less than required. 00:06:12.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:12.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:12.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:12.984 Initialization complete. Launching workers. 
00:06:12.984 ======================================================== 00:06:12.984 Latency(us) 00:06:12.984 Device Information : IOPS MiB/s Average min max 00:06:12.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001698.87 1000120.37 1006194.83 00:06:12.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002938.49 1000133.56 1007721.23 00:06:12.984 ======================================================== 00:06:12.984 Total : 256.00 0.12 1002318.68 1000120.37 1007721.23 00:06:12.984 00:06:13.555 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:13.555 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 490274 00:06:13.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (490274) - No such process 00:06:13.555 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 490274 00:06:13.555 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:13.555 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:13.555 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:13.555 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:06:13.556 rmmod nvme_tcp 00:06:13.556 rmmod nvme_fabrics 00:06:13.556 rmmod nvme_keyring 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 489247 ']' 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 489247 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 489247 ']' 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 489247 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 489247 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 489247' 00:06:13.556 killing process with pid 489247 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 489247 00:06:13.556 08:51:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 489247 
00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.556 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:16.102 00:06:16.102 real 0m18.423s 00:06:16.102 user 0m31.007s 00:06:16.102 sys 0m6.857s 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.102 ************************************ 00:06:16.102 END TEST 
nvmf_delete_subsystem 00:06:16.102 ************************************ 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:16.102 ************************************ 00:06:16.102 START TEST nvmf_host_management 00:06:16.102 ************************************ 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:16.102 * Looking for test storage... 00:06:16.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.102 08:51:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.102 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.103 --rc genhtml_branch_coverage=1 00:06:16.103 --rc genhtml_function_coverage=1 00:06:16.103 --rc genhtml_legend=1 00:06:16.103 --rc 
geninfo_all_blocks=1 00:06:16.103 --rc geninfo_unexecuted_blocks=1 00:06:16.103 00:06:16.103 ' 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.103 --rc genhtml_branch_coverage=1 00:06:16.103 --rc genhtml_function_coverage=1 00:06:16.103 --rc genhtml_legend=1 00:06:16.103 --rc geninfo_all_blocks=1 00:06:16.103 --rc geninfo_unexecuted_blocks=1 00:06:16.103 00:06:16.103 ' 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.103 --rc genhtml_branch_coverage=1 00:06:16.103 --rc genhtml_function_coverage=1 00:06:16.103 --rc genhtml_legend=1 00:06:16.103 --rc geninfo_all_blocks=1 00:06:16.103 --rc geninfo_unexecuted_blocks=1 00:06:16.103 00:06:16.103 ' 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.103 --rc genhtml_branch_coverage=1 00:06:16.103 --rc genhtml_function_coverage=1 00:06:16.103 --rc genhtml_legend=1 00:06:16.103 --rc geninfo_all_blocks=1 00:06:16.103 --rc geninfo_unexecuted_blocks=1 00:06:16.103 00:06:16.103 ' 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.103 
08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:16.103 08:51:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:24.244 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:24.244 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.244 08:51:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:24.244 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:24.245 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:24.245 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:24.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:24.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:06:24.245 00:06:24.245 --- 10.0.0.2 ping statistics --- 00:06:24.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.245 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:24.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:06:24.245 00:06:24.245 --- 10.0.0.1 ping statistics --- 00:06:24.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.245 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.245 08:51:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=495299 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 495299 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 495299 ']' 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.245 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.245 [2024-11-20 08:51:49.027196] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
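Editor's note: the `nvmf_tcp_init` sequence logged above moves the target interface into its own network namespace, addresses both sides, and then launches `nvmf_tgt` inside that namespace. A minimal dry-run sketch of that setup pattern follows; the interface, namespace, and IP names are taken from the log, but `ip` is overridden to echo its arguments, so this only illustrates the command sequence and does not need root or the real `cvl_0_*` NICs.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf/common.sh above.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NS=cvl_0_0_ns_spdk

ip() { echo "ip $*"; }                       # dry-run stub: print, don't execute

ip -4 addr flush "$INITIATOR_IF"             # clear any stale addressing
ip netns add "$NS"                           # target side lives in its own netns
ip link set "$TARGET_IF" netns "$NS"         # move the target NIC into it
ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
```

The real helper then opens TCP port 4420 with an iptables ACCEPT rule and ping-checks both directions, as the log shows, before starting `nvmf_tgt` via `ip netns exec cvl_0_0_ns_spdk`.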
00:06:24.245 [2024-11-20 08:51:49.027261] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.245 [2024-11-20 08:51:49.126492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.245 [2024-11-20 08:51:49.179673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.245 [2024-11-20 08:51:49.179729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.245 [2024-11-20 08:51:49.179737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.245 [2024-11-20 08:51:49.179745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.245 [2024-11-20 08:51:49.179751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:24.245 [2024-11-20 08:51:49.181754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.245 [2024-11-20 08:51:49.181920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.245 [2024-11-20 08:51:49.182081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.245 [2024-11-20 08:51:49.182081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.507 [2024-11-20 08:51:49.902153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:24.507 08:51:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.507 Malloc0 00:06:24.507 [2024-11-20 08:51:49.986292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.507 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.768 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=495584 00:06:24.768 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 495584 /var/tmp/bdevperf.sock 00:06:24.768 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 495584 ']' 00:06:24.768 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:24.768 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.768 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:24.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:24.768 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.769 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:24.769 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.769 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:24.769 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:24.769 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:24.769 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:24.769 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:24.769 { 00:06:24.769 "params": { 00:06:24.769 "name": "Nvme$subsystem", 00:06:24.769 "trtype": "$TEST_TRANSPORT", 00:06:24.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:24.769 "adrfam": "ipv4", 00:06:24.769 "trsvcid": "$NVMF_PORT", 00:06:24.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:24.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:24.769 "hdgst": ${hdgst:-false}, 
00:06:24.769 "ddgst": ${ddgst:-false} 00:06:24.769 }, 00:06:24.769 "method": "bdev_nvme_attach_controller" 00:06:24.769 } 00:06:24.769 EOF 00:06:24.769 )") 00:06:24.769 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:24.769 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:24.769 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:24.769 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:24.769 "params": { 00:06:24.769 "name": "Nvme0", 00:06:24.769 "trtype": "tcp", 00:06:24.769 "traddr": "10.0.0.2", 00:06:24.769 "adrfam": "ipv4", 00:06:24.769 "trsvcid": "4420", 00:06:24.769 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:24.769 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:24.769 "hdgst": false, 00:06:24.769 "ddgst": false 00:06:24.769 }, 00:06:24.769 "method": "bdev_nvme_attach_controller" 00:06:24.769 }' 00:06:24.769 [2024-11-20 08:51:50.099021] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:06:24.769 [2024-11-20 08:51:50.099095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495584 ] 00:06:24.769 [2024-11-20 08:51:50.192879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.769 [2024-11-20 08:51:50.245927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.029 Running I/O for 10 seconds... 
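Editor's note: the `waitforio` helper that follows in the log polls `bdev_get_iostat` over `/var/tmp/bdevperf.sock` up to 10 times and succeeds once the bdev has served at least 100 reads (here it sees 777). A minimal sketch of that loop, with `rpc_cmd` stubbed to return canned counters so it runs without a live bdevperf socket; the thresholds and loop shape mirror `host_management.sh` as logged above.

```shell
# Sketch of the waitforio polling loop, rpc_cmd stubbed for illustration.
samples=(0 13 777)                   # pretend read counters per poll (stub data)
poll=0
rpc_cmd() { echo "${samples[$1]}"; } # stands in for: rpc.py bdev_get_iostat | jq

ret=1
for ((i = 10; i != 0; i--)); do
  read_io_count=$(rpc_cmd "$poll")
  poll=$((poll + 1))
  if [ "$read_io_count" -ge 100 ]; then   # same threshold as the log (777 >= 100)
    ret=0
    break
  fi
  sleep 0.25                               # real helper waits between polls
done
echo "ret=$ret read_io_count=$read_io_count"
```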
00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.602 08:51:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.602 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=777 00:06:25.602 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 777 -ge 100 ']' 00:06:25.602 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:25.602 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:25.602 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:25.602 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:25.602 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.602 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.602 [2024-11-20 08:51:51.014980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864130 is same with the state(6) to be set 00:06:25.602 [2024-11-20 08:51:51.015098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864130 is same with the state(6) to be set 00:06:25.602 [2024-11-20 08:51:51.015108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864130 is same 
with the state(6) to be set 00:06:25.603 [2024-11-20 08:51:51.015677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 
08:51:51.015945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.015989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.015996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.016005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.016013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.016022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.016030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603 [2024-11-20 08:51:51.016039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.603 [2024-11-20 08:51:51.016046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.603
[... repeated nvme_qpair.c NOTICE pairs elided: outstanding READ and WRITE commands on sqid:1 (READ cids 37-63 and 0-2, lbas 111232-114944; WRITE cids 6-22, lbas 115456-117504; all len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed with ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 2024-11-20 08:51:51.016056 through 08:51:51.016865 ...]
00:06:25.604 [2024-11-20 08:51:51.016875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.604 [2024-11-20 08:51:51.016885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.604 [2024-11-20 08:51:51.016895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.604 [2024-11-20 08:51:51.016905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.604 [2024-11-20 08:51:51.016915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.604 [2024-11-20 08:51:51.016922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.604 [2024-11-20 08:51:51.016931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7190 is same with the state(6) to be set 00:06:25.604 [2024-11-20 08:51:51.018241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:25.604 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.604 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:25.604 task offset: 115072 on job bdev=Nvme0n1 fails 00:06:25.604 00:06:25.604 Latency(us) 00:06:25.604 [2024-11-20T07:51:51.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:25.604 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:25.604 Job: Nvme0n1 ended in about 0.56 seconds with error 
00:06:25.604 Verification LBA range: start 0x0 length 0x400 00:06:25.604 Nvme0n1 : 0.56 1514.30 94.64 113.35 0.00 38350.22 1966.08 35826.35 00:06:25.604 [2024-11-20T07:51:51.133Z] =================================================================================================================== 00:06:25.604 [2024-11-20T07:51:51.133Z] Total : 1514.30 94.64 113.35 0.00 38350.22 1966.08 35826.35 00:06:25.604 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.604 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.604 [2024-11-20 08:51:51.020500] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.604 [2024-11-20 08:51:51.020542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ae000 (9): Bad file descriptor 00:06:25.604 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.604 08:51:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:25.605 [2024-11-20 08:51:51.075821] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 495584 00:06:26.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (495584) - No such process 00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:26.547 { 00:06:26.547 "params": { 00:06:26.547 "name": "Nvme$subsystem", 00:06:26.547 "trtype": "$TEST_TRANSPORT", 00:06:26.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:26.547 "adrfam": "ipv4", 00:06:26.547 "trsvcid": "$NVMF_PORT", 00:06:26.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:26.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:26.547 "hdgst": ${hdgst:-false}, 00:06:26.547 "ddgst": ${ddgst:-false} 00:06:26.547 }, 00:06:26.547 "method": "bdev_nvme_attach_controller" 00:06:26.547 } 00:06:26.547 EOF 00:06:26.547 )") 00:06:26.547 08:51:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:26.547 08:51:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:26.547 "params": { 00:06:26.547 "name": "Nvme0", 00:06:26.547 "trtype": "tcp", 00:06:26.547 "traddr": "10.0.0.2", 00:06:26.547 "adrfam": "ipv4", 00:06:26.547 "trsvcid": "4420", 00:06:26.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:26.547 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:26.547 "hdgst": false, 00:06:26.547 "ddgst": false 00:06:26.547 }, 00:06:26.547 "method": "bdev_nvme_attach_controller" 00:06:26.547 }' 00:06:26.808 [2024-11-20 08:51:52.090006] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:06:26.808 [2024-11-20 08:51:52.090057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid496020 ] 00:06:26.808 [2024-11-20 08:51:52.176567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.808 [2024-11-20 08:51:52.212217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.069 Running I/O for 1 seconds... 
00:06:28.011 1728.00 IOPS, 108.00 MiB/s 00:06:28.012 Latency(us) 00:06:28.012 [2024-11-20T07:51:53.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:28.012 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:28.012 Verification LBA range: start 0x0 length 0x400 00:06:28.012 Nvme0n1 : 1.01 1768.61 110.54 0.00 0.00 35526.74 6417.07 32986.45 00:06:28.012 [2024-11-20T07:51:53.541Z] =================================================================================================================== 00:06:28.012 [2024-11-20T07:51:53.541Z] Total : 1768.61 110.54 0.00 0.00 35526.74 6417.07 32986.45 00:06:28.012 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:28.012 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:28.012 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:28.012 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:28.273 08:51:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:28.273 rmmod nvme_tcp 00:06:28.273 rmmod nvme_fabrics 00:06:28.273 rmmod nvme_keyring 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 495299 ']' 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 495299 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 495299 ']' 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 495299 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 495299 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 495299' 00:06:28.273 killing process with pid 495299 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 495299 00:06:28.273 08:51:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 495299 00:06:28.273 [2024-11-20 08:51:53.766970] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:28.273 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.535 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.535 08:51:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.449 08:51:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:30.449 08:51:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:30.449 00:06:30.449 real 0m14.647s 00:06:30.449 user 0m23.170s 
00:06:30.449 sys 0m6.757s 00:06:30.449 08:51:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.449 08:51:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.449 ************************************ 00:06:30.449 END TEST nvmf_host_management 00:06:30.449 ************************************ 00:06:30.449 08:51:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:30.449 08:51:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.449 08:51:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.449 08:51:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.449 ************************************ 00:06:30.449 START TEST nvmf_lvol 00:06:30.449 ************************************ 00:06:30.449 08:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:30.710 * Looking for test storage... 
00:06:30.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.710 08:51:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:30.710 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.711 --rc genhtml_branch_coverage=1 00:06:30.711 --rc genhtml_function_coverage=1 00:06:30.711 --rc genhtml_legend=1 00:06:30.711 --rc geninfo_all_blocks=1 00:06:30.711 --rc geninfo_unexecuted_blocks=1 
00:06:30.711 00:06:30.711 ' 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.711 --rc genhtml_branch_coverage=1 00:06:30.711 --rc genhtml_function_coverage=1 00:06:30.711 --rc genhtml_legend=1 00:06:30.711 --rc geninfo_all_blocks=1 00:06:30.711 --rc geninfo_unexecuted_blocks=1 00:06:30.711 00:06:30.711 ' 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.711 --rc genhtml_branch_coverage=1 00:06:30.711 --rc genhtml_function_coverage=1 00:06:30.711 --rc genhtml_legend=1 00:06:30.711 --rc geninfo_all_blocks=1 00:06:30.711 --rc geninfo_unexecuted_blocks=1 00:06:30.711 00:06:30.711 ' 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.711 --rc genhtml_branch_coverage=1 00:06:30.711 --rc genhtml_function_coverage=1 00:06:30.711 --rc genhtml_legend=1 00:06:30.711 --rc geninfo_all_blocks=1 00:06:30.711 --rc geninfo_unexecuted_blocks=1 00:06:30.711 00:06:30.711 ' 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.711 08:51:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:30.711 08:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:38.857 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:38.857 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.857 
08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:38.857 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:38.857 08:52:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:38.857 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:38.858 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:38.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:38.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:06:38.858 00:06:38.858 --- 10.0.0.2 ping statistics --- 00:06:38.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.858 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:38.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:06:38.858 00:06:38.858 --- 10.0.0.1 ping statistics --- 00:06:38.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.858 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=500502 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 500502 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 500502 ']' 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.858 08:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.858 [2024-11-20 08:52:03.756057] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:06:38.858 [2024-11-20 08:52:03.756126] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.858 [2024-11-20 08:52:03.854432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.858 [2024-11-20 08:52:03.907089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.858 [2024-11-20 08:52:03.907144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.858 [2024-11-20 08:52:03.907152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.858 [2024-11-20 08:52:03.907171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.858 [2024-11-20 08:52:03.907179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:38.858 [2024-11-20 08:52:03.909014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.858 [2024-11-20 08:52:03.909199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.858 [2024-11-20 08:52:03.909259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.119 08:52:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.119 08:52:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:39.119 08:52:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:39.119 08:52:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.119 08:52:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:39.119 08:52:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.119 08:52:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:39.379 [2024-11-20 08:52:04.793259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.379 08:52:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:39.639 08:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:39.639 08:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:39.899 08:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:39.899 08:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:40.159 08:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:40.159 08:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ee26bd7c-66ec-41b8-9402-3a1e79d8f55d 00:06:40.159 08:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ee26bd7c-66ec-41b8-9402-3a1e79d8f55d lvol 20 00:06:40.419 08:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7fba13a9-cdd7-4a19-952a-b42ab3369739 00:06:40.419 08:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:40.679 08:52:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7fba13a9-cdd7-4a19-952a-b42ab3369739 00:06:40.938 08:52:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:40.938 [2024-11-20 08:52:06.422718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.938 08:52:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:41.199 08:52:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=501099 00:06:41.199 08:52:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:41.199 08:52:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:42.142 08:52:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7fba13a9-cdd7-4a19-952a-b42ab3369739 MY_SNAPSHOT 00:06:42.405 08:52:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4cedb036-a584-46cc-8cd6-fba5c0d9e4c0 00:06:42.405 08:52:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7fba13a9-cdd7-4a19-952a-b42ab3369739 30 00:06:42.761 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4cedb036-a584-46cc-8cd6-fba5c0d9e4c0 MY_CLONE 00:06:43.055 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e563ac4e-e4d0-475a-a4db-a2dabd3cd648 00:06:43.055 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e563ac4e-e4d0-475a-a4db-a2dabd3cd648 00:06:43.373 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 501099 00:06:51.510 Initializing NVMe Controllers 00:06:51.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:51.510 Controller IO queue size 128, less than required. 00:06:51.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:51.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:51.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:51.510 Initialization complete. Launching workers. 00:06:51.510 ======================================================== 00:06:51.510 Latency(us) 00:06:51.510 Device Information : IOPS MiB/s Average min max 00:06:51.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16277.80 63.59 7863.86 1524.23 47823.92 00:06:51.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17416.00 68.03 7348.98 1326.34 54677.16 00:06:51.510 ======================================================== 00:06:51.510 Total : 33693.80 131.62 7597.72 1326.34 54677.16 00:06:51.510 00:06:51.510 08:52:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:51.772 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7fba13a9-cdd7-4a19-952a-b42ab3369739 00:06:52.033 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ee26bd7c-66ec-41b8-9402-3a1e79d8f55d 00:06:52.033 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:52.033 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:52.033 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:52.033 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:52.033 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:52.033 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:52.033 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:52.033 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:52.033 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:52.033 rmmod nvme_tcp 00:06:52.033 rmmod nvme_fabrics 00:06:52.033 rmmod nvme_keyring 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 500502 ']' 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 500502 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 500502 ']' 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 500502 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 500502 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 500502' 00:06:52.294 killing process with pid 500502 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 500502 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 500502 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.294 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.836 08:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:54.836 00:06:54.836 real 0m23.896s 00:06:54.836 user 1m4.789s 00:06:54.836 sys 0m8.544s 00:06:54.836 08:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.836 08:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:54.836 ************************************ 00:06:54.836 END TEST nvmf_lvol 00:06:54.836 
************************************ 00:06:54.836 08:52:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:54.836 08:52:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.836 08:52:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.836 08:52:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:54.836 ************************************ 00:06:54.836 START TEST nvmf_lvs_grow 00:06:54.836 ************************************ 00:06:54.836 08:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:54.836 * Looking for test storage... 00:06:54.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.836 --rc genhtml_branch_coverage=1 00:06:54.836 --rc genhtml_function_coverage=1 00:06:54.836 --rc genhtml_legend=1 00:06:54.836 --rc geninfo_all_blocks=1 00:06:54.836 --rc geninfo_unexecuted_blocks=1 00:06:54.836 00:06:54.836 ' 
00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.836 --rc genhtml_branch_coverage=1 00:06:54.836 --rc genhtml_function_coverage=1 00:06:54.836 --rc genhtml_legend=1 00:06:54.836 --rc geninfo_all_blocks=1 00:06:54.836 --rc geninfo_unexecuted_blocks=1 00:06:54.836 00:06:54.836 ' 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.836 --rc genhtml_branch_coverage=1 00:06:54.836 --rc genhtml_function_coverage=1 00:06:54.836 --rc genhtml_legend=1 00:06:54.836 --rc geninfo_all_blocks=1 00:06:54.836 --rc geninfo_unexecuted_blocks=1 00:06:54.836 00:06:54.836 ' 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.836 --rc genhtml_branch_coverage=1 00:06:54.836 --rc genhtml_function_coverage=1 00:06:54.836 --rc genhtml_legend=1 00:06:54.836 --rc geninfo_all_blocks=1 00:06:54.836 --rc geninfo_unexecuted_blocks=1 00:06:54.836 00:06:54.836 ' 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.836 08:52:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.836 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.837 
08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.837 08:52:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.837 
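The error above ("line 33: [: : integer expression expected") comes from evaluating `[ '' -eq 1 ]`: the variable being tested expands to an empty string, and bash's `test` builtin requires an integer operand for `-eq`. Defaulting the expansion avoids the message; `is_enabled` and its flag argument below are hypothetical names sketching the defensive form, not the actual nvmf/common.sh code:

```shell
#!/usr/bin/env bash
# Guard an integer comparison against an unset/empty flag variable by
# substituting 0 before `-eq` ever sees an empty string.
is_enabled() {
  local flag=$1
  if [ "${flag:-0}" -eq 1 ]; then
    echo enabled
  else
    echo disabled
  fi
}

is_enabled ""    # -> disabled (no "integer expression expected" error)
is_enabled 1     # -> enabled
```

The unguarded `[ "$flag" -eq 1 ]` would print the same diagnostic seen in the log whenever the flag is empty.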
08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:54.837 08:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:02.983 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:02.983 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.983 
08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:02.983 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:02.983 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.983 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
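The discovery loop traced here maps each supported PCI function to its kernel netdev by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix (`${pci_net_devs[@]##*/}`), yielding the "Found net devices under 0000:4b:00.x" lines. A self-contained sketch of that glob-and-strip logic against a throwaway mock sysfs tree (the `$mock` directory layout is fabricated to mirror the two E810 ports in the log):

```shell
#!/usr/bin/env bash
set -e

# Build a mock of /sys/bus/pci/devices/<bdf>/net/<ifname> for two ports.
mock=$(mktemp -d)
mkdir -p "$mock/0000:4b:00.0/net/cvl_0_0" "$mock/0000:4b:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:4b:00.0 0000:4b:00.1; do
  # Glob the per-device net/ directory, as nvmf/common.sh does.
  pci_net_devs=("$mock/$pci/net/"*)
  # Strip everything up to the last '/' to keep only the ifname.
  pci_net_devs=("${pci_net_devs[@]##*/}")
  net_devs+=("${pci_net_devs[@]}")
done

echo "Found net devices: ${net_devs[*]}"   # Found net devices: cvl_0_0 cvl_0_1
rm -rf "$mock"
```

On a real system the same loop runs over the actual sysfs tree, which is why the test later has concrete interface names (`cvl_0_0`, `cvl_0_1`) to move into a namespace.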
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:02.984 08:52:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:02.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:07:02.984 00:07:02.984 --- 10.0.0.2 ping statistics --- 00:07:02.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.984 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:07:02.984 00:07:02.984 --- 10.0.0.1 ping statistics --- 00:07:02.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.984 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
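The two pings above (host to 10.0.0.2, then from inside the `cvl_0_0_ns_spdk` namespace back to 10.0.0.1) gate the rest of the test on bidirectional connectivity. The loss figure can be extracted from ping's summary line; here a sample summary copied from the log output stands in for a live ping run:

```shell
#!/usr/bin/env bash
# Parse the percentage out of a ping statistics summary line.
summary="1 packets transmitted, 1 received, 0% packet loss, time 0ms"
loss=$(printf '%s\n' "$summary" | sed -n 's/.* \([0-9.]*\)% packet loss.*/\1/p')

if [ "$loss" = "0" ]; then
  echo "link ok"   # link ok
fi
```

A harness could run `ping -c 1 "$ip"` and feed its real output through the same `sed` expression, failing fast when loss is nonzero instead of relying on ping's exit code alone.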
nvmfappstart -m 0x1 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=507491 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 507491 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 507491 ']' 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.984 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.984 [2024-11-20 08:52:27.693512] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:07:02.984 [2024-11-20 08:52:27.693578] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.984 [2024-11-20 08:52:27.794530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.984 [2024-11-20 08:52:27.846940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.984 [2024-11-20 08:52:27.846993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.984 [2024-11-20 08:52:27.847002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.984 [2024-11-20 08:52:27.847009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.984 [2024-11-20 08:52:27.847015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:02.984 [2024-11-20 08:52:27.847772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.246 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.246 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:03.246 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:03.246 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:03.246 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:03.246 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.246 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:03.246 [2024-11-20 08:52:28.718243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.246 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:03.246 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.246 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.246 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:03.507 ************************************ 00:07:03.507 START TEST lvs_grow_clean 00:07:03.507 ************************************ 00:07:03.508 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:03.508 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:03.508 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:03.508 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:03.508 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:03.508 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:03.508 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:03.508 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.508 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.508 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:03.508 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:03.508 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:03.769 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ad3a6c45-8931-43c8-8053-64a539387e37 00:07:03.769 08:52:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad3a6c45-8931-43c8-8053-64a539387e37 00:07:03.769 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:04.030 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:04.030 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:04.030 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ad3a6c45-8931-43c8-8053-64a539387e37 lvol 150 00:07:04.292 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=99479e84-56f3-49d0-8367-4efb27a05d24 00:07:04.292 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:04.292 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:04.292 [2024-11-20 08:52:29.751595] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:04.292 [2024-11-20 08:52:29.751669] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:04.292 true 00:07:04.292 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad3a6c45-8931-43c8-8053-64a539387e37 00:07:04.292 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:04.552 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:04.552 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:04.813 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 99479e84-56f3-49d0-8367-4efb27a05d24 00:07:04.813 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:05.073 [2024-11-20 08:52:30.477868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.073 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.334 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=508181 00:07:05.334 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:05.334 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:05.334 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 508181 /var/tmp/bdevperf.sock 00:07:05.334 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 508181 ']' 00:07:05.334 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:05.334 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.334 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:05.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:05.334 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.334 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:05.334 [2024-11-20 08:52:30.728509] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:07:05.334 [2024-11-20 08:52:30.728579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid508181 ] 00:07:05.334 [2024-11-20 08:52:30.825030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.595 [2024-11-20 08:52:30.876976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.168 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.168 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:06.168 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:06.429 Nvme0n1 00:07:06.429 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:06.689 [ 00:07:06.689 { 00:07:06.689 "name": "Nvme0n1", 00:07:06.689 "aliases": [ 00:07:06.689 "99479e84-56f3-49d0-8367-4efb27a05d24" 00:07:06.689 ], 00:07:06.689 "product_name": "NVMe disk", 00:07:06.689 "block_size": 4096, 00:07:06.689 "num_blocks": 38912, 00:07:06.689 "uuid": "99479e84-56f3-49d0-8367-4efb27a05d24", 00:07:06.689 "numa_id": 0, 00:07:06.689 "assigned_rate_limits": { 00:07:06.689 "rw_ios_per_sec": 0, 00:07:06.689 "rw_mbytes_per_sec": 0, 00:07:06.689 "r_mbytes_per_sec": 0, 00:07:06.689 "w_mbytes_per_sec": 0 00:07:06.689 }, 00:07:06.689 "claimed": false, 00:07:06.689 "zoned": false, 00:07:06.689 "supported_io_types": { 00:07:06.689 "read": true, 
00:07:06.689 "write": true, 00:07:06.689 "unmap": true, 00:07:06.689 "flush": true, 00:07:06.689 "reset": true, 00:07:06.689 "nvme_admin": true, 00:07:06.689 "nvme_io": true, 00:07:06.689 "nvme_io_md": false, 00:07:06.689 "write_zeroes": true, 00:07:06.689 "zcopy": false, 00:07:06.689 "get_zone_info": false, 00:07:06.689 "zone_management": false, 00:07:06.689 "zone_append": false, 00:07:06.689 "compare": true, 00:07:06.689 "compare_and_write": true, 00:07:06.689 "abort": true, 00:07:06.689 "seek_hole": false, 00:07:06.689 "seek_data": false, 00:07:06.689 "copy": true, 00:07:06.689 "nvme_iov_md": false 00:07:06.689 }, 00:07:06.689 "memory_domains": [ 00:07:06.689 { 00:07:06.689 "dma_device_id": "system", 00:07:06.689 "dma_device_type": 1 00:07:06.689 } 00:07:06.689 ], 00:07:06.689 "driver_specific": { 00:07:06.689 "nvme": [ 00:07:06.689 { 00:07:06.689 "trid": { 00:07:06.689 "trtype": "TCP", 00:07:06.689 "adrfam": "IPv4", 00:07:06.689 "traddr": "10.0.0.2", 00:07:06.689 "trsvcid": "4420", 00:07:06.689 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:06.689 }, 00:07:06.689 "ctrlr_data": { 00:07:06.689 "cntlid": 1, 00:07:06.689 "vendor_id": "0x8086", 00:07:06.689 "model_number": "SPDK bdev Controller", 00:07:06.689 "serial_number": "SPDK0", 00:07:06.689 "firmware_revision": "25.01", 00:07:06.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:06.689 "oacs": { 00:07:06.689 "security": 0, 00:07:06.689 "format": 0, 00:07:06.689 "firmware": 0, 00:07:06.689 "ns_manage": 0 00:07:06.689 }, 00:07:06.689 "multi_ctrlr": true, 00:07:06.689 "ana_reporting": false 00:07:06.689 }, 00:07:06.689 "vs": { 00:07:06.689 "nvme_version": "1.3" 00:07:06.689 }, 00:07:06.689 "ns_data": { 00:07:06.689 "id": 1, 00:07:06.689 "can_share": true 00:07:06.689 } 00:07:06.689 } 00:07:06.689 ], 00:07:06.689 "mp_policy": "active_passive" 00:07:06.689 } 00:07:06.689 } 00:07:06.689 ] 00:07:06.689 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=508516 
00:07:06.689 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:06.689 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:06.689 Running I/O for 10 seconds... 00:07:08.073 Latency(us) 00:07:08.073 [2024-11-20T07:52:33.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.074 Nvme0n1 : 1.00 25061.00 97.89 0.00 0.00 0.00 0.00 0.00 00:07:08.074 [2024-11-20T07:52:33.603Z] =================================================================================================================== 00:07:08.074 [2024-11-20T07:52:33.603Z] Total : 25061.00 97.89 0.00 0.00 0.00 0.00 0.00 00:07:08.074 00:07:08.644 08:52:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ad3a6c45-8931-43c8-8053-64a539387e37 00:07:08.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.905 Nvme0n1 : 2.00 25234.50 98.57 0.00 0.00 0.00 0.00 0.00 00:07:08.905 [2024-11-20T07:52:34.434Z] =================================================================================================================== 00:07:08.905 [2024-11-20T07:52:34.434Z] Total : 25234.50 98.57 0.00 0.00 0.00 0.00 0.00 00:07:08.905 00:07:08.905 true 00:07:08.905 08:52:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad3a6c45-8931-43c8-8053-64a539387e37 00:07:08.905 08:52:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:09.166 08:52:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:09.166 08:52:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:09.166 08:52:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 508516 00:07:09.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.737 Nvme0n1 : 3.00 25325.33 98.93 0.00 0.00 0.00 0.00 0.00 00:07:09.737 [2024-11-20T07:52:35.266Z] =================================================================================================================== 00:07:09.737 [2024-11-20T07:52:35.266Z] Total : 25325.33 98.93 0.00 0.00 0.00 0.00 0.00 00:07:09.737 00:07:10.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.679 Nvme0n1 : 4.00 25380.00 99.14 0.00 0.00 0.00 0.00 0.00 00:07:10.679 [2024-11-20T07:52:36.208Z] =================================================================================================================== 00:07:10.679 [2024-11-20T07:52:36.208Z] Total : 25380.00 99.14 0.00 0.00 0.00 0.00 0.00 00:07:10.679 00:07:12.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.064 Nvme0n1 : 5.00 25423.60 99.31 0.00 0.00 0.00 0.00 0.00 00:07:12.064 [2024-11-20T07:52:37.593Z] =================================================================================================================== 00:07:12.064 [2024-11-20T07:52:37.593Z] Total : 25423.60 99.31 0.00 0.00 0.00 0.00 0.00 00:07:12.064 00:07:13.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.005 Nvme0n1 : 6.00 25453.00 99.43 0.00 0.00 0.00 0.00 0.00 00:07:13.005 [2024-11-20T07:52:38.534Z] =================================================================================================================== 00:07:13.005 
[2024-11-20T07:52:38.534Z] Total : 25453.00 99.43 0.00 0.00 0.00 0.00 0.00 00:07:13.005 00:07:13.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.946 Nvme0n1 : 7.00 25473.86 99.51 0.00 0.00 0.00 0.00 0.00 00:07:13.946 [2024-11-20T07:52:39.475Z] =================================================================================================================== 00:07:13.946 [2024-11-20T07:52:39.475Z] Total : 25473.86 99.51 0.00 0.00 0.00 0.00 0.00 00:07:13.946 00:07:14.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.889 Nvme0n1 : 8.00 25489.25 99.57 0.00 0.00 0.00 0.00 0.00 00:07:14.889 [2024-11-20T07:52:40.418Z] =================================================================================================================== 00:07:14.889 [2024-11-20T07:52:40.418Z] Total : 25489.25 99.57 0.00 0.00 0.00 0.00 0.00 00:07:14.889 00:07:15.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.830 Nvme0n1 : 9.00 25508.67 99.64 0.00 0.00 0.00 0.00 0.00 00:07:15.830 [2024-11-20T07:52:41.359Z] =================================================================================================================== 00:07:15.830 [2024-11-20T07:52:41.359Z] Total : 25508.67 99.64 0.00 0.00 0.00 0.00 0.00 00:07:15.830 00:07:16.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.772 Nvme0n1 : 10.00 25517.80 99.68 0.00 0.00 0.00 0.00 0.00 00:07:16.772 [2024-11-20T07:52:42.301Z] =================================================================================================================== 00:07:16.772 [2024-11-20T07:52:42.301Z] Total : 25517.80 99.68 0.00 0.00 0.00 0.00 0.00 00:07:16.772 00:07:16.772 00:07:16.772 Latency(us) 00:07:16.772 [2024-11-20T07:52:42.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:16.772 Nvme0n1 : 10.00 25519.65 99.69 0.00 0.00 5012.18 2525.87 15182.51 00:07:16.772 [2024-11-20T07:52:42.301Z] =================================================================================================================== 00:07:16.772 [2024-11-20T07:52:42.301Z] Total : 25519.65 99.69 0.00 0.00 5012.18 2525.87 15182.51 00:07:16.772 { 00:07:16.772 "results": [ 00:07:16.772 { 00:07:16.772 "job": "Nvme0n1", 00:07:16.772 "core_mask": "0x2", 00:07:16.772 "workload": "randwrite", 00:07:16.772 "status": "finished", 00:07:16.772 "queue_depth": 128, 00:07:16.772 "io_size": 4096, 00:07:16.772 "runtime": 10.004291, 00:07:16.772 "iops": 25519.64951839166, 00:07:16.772 "mibps": 99.68613093121742, 00:07:16.772 "io_failed": 0, 00:07:16.772 "io_timeout": 0, 00:07:16.772 "avg_latency_us": 5012.176015918153, 00:07:16.772 "min_latency_us": 2525.866666666667, 00:07:16.772 "max_latency_us": 15182.506666666666 00:07:16.772 } 00:07:16.772 ], 00:07:16.772 "core_count": 1 00:07:16.772 } 00:07:16.772 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 508181 00:07:16.772 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 508181 ']' 00:07:16.772 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 508181 00:07:16.772 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:16.772 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.772 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 508181 00:07:16.772 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:16.772 08:52:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:16.772 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 508181' 00:07:16.773 killing process with pid 508181 00:07:16.773 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 508181 00:07:16.773 Received shutdown signal, test time was about 10.000000 seconds 00:07:16.773 00:07:16.773 Latency(us) 00:07:16.773 [2024-11-20T07:52:42.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.773 [2024-11-20T07:52:42.302Z] =================================================================================================================== 00:07:16.773 [2024-11-20T07:52:42.302Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:16.773 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 508181 00:07:17.033 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.293 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:17.293 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad3a6c45-8931-43c8-8053-64a539387e37 00:07:17.293 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:17.554 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:07:17.554 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:17.554 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:17.814 [2024-11-20 08:52:43.106385] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:17.814 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad3a6c45-8931-43c8-8053-64a539387e37 00:07:17.814 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:17.814 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad3a6c45-8931-43c8-8053-64a539387e37 00:07:17.814 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.814 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.814 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.814 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.814 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.814 08:52:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.814 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.814 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:17.814 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad3a6c45-8931-43c8-8053-64a539387e37 00:07:17.814 request: 00:07:17.814 { 00:07:17.814 "uuid": "ad3a6c45-8931-43c8-8053-64a539387e37", 00:07:17.814 "method": "bdev_lvol_get_lvstores", 00:07:17.814 "req_id": 1 00:07:17.814 } 00:07:17.814 Got JSON-RPC error response 00:07:17.814 response: 00:07:17.814 { 00:07:17.814 "code": -19, 00:07:17.814 "message": "No such device" 00:07:17.814 } 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:18.075 aio_bdev 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 99479e84-56f3-49d0-8367-4efb27a05d24 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=99479e84-56f3-49d0-8367-4efb27a05d24 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:18.075 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:18.336 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 99479e84-56f3-49d0-8367-4efb27a05d24 -t 2000 00:07:18.597 [ 00:07:18.597 { 00:07:18.597 "name": "99479e84-56f3-49d0-8367-4efb27a05d24", 00:07:18.597 "aliases": [ 00:07:18.597 "lvs/lvol" 00:07:18.597 ], 00:07:18.597 "product_name": "Logical Volume", 00:07:18.597 "block_size": 4096, 00:07:18.597 "num_blocks": 38912, 00:07:18.597 "uuid": "99479e84-56f3-49d0-8367-4efb27a05d24", 00:07:18.597 "assigned_rate_limits": { 00:07:18.597 "rw_ios_per_sec": 0, 00:07:18.597 "rw_mbytes_per_sec": 0, 00:07:18.597 "r_mbytes_per_sec": 0, 00:07:18.597 "w_mbytes_per_sec": 0 00:07:18.597 }, 00:07:18.597 "claimed": false, 00:07:18.597 "zoned": false, 00:07:18.597 "supported_io_types": { 00:07:18.597 "read": true, 00:07:18.597 "write": true, 00:07:18.597 "unmap": true, 00:07:18.597 "flush": false, 00:07:18.597 "reset": true, 00:07:18.597 
"nvme_admin": false, 00:07:18.597 "nvme_io": false, 00:07:18.597 "nvme_io_md": false, 00:07:18.597 "write_zeroes": true, 00:07:18.597 "zcopy": false, 00:07:18.597 "get_zone_info": false, 00:07:18.597 "zone_management": false, 00:07:18.597 "zone_append": false, 00:07:18.597 "compare": false, 00:07:18.597 "compare_and_write": false, 00:07:18.597 "abort": false, 00:07:18.597 "seek_hole": true, 00:07:18.597 "seek_data": true, 00:07:18.597 "copy": false, 00:07:18.597 "nvme_iov_md": false 00:07:18.597 }, 00:07:18.597 "driver_specific": { 00:07:18.597 "lvol": { 00:07:18.597 "lvol_store_uuid": "ad3a6c45-8931-43c8-8053-64a539387e37", 00:07:18.597 "base_bdev": "aio_bdev", 00:07:18.597 "thin_provision": false, 00:07:18.597 "num_allocated_clusters": 38, 00:07:18.597 "snapshot": false, 00:07:18.597 "clone": false, 00:07:18.597 "esnap_clone": false 00:07:18.597 } 00:07:18.597 } 00:07:18.597 } 00:07:18.597 ] 00:07:18.597 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:18.597 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad3a6c45-8931-43c8-8053-64a539387e37 00:07:18.597 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:18.597 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:18.597 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad3a6c45-8931-43c8-8053-64a539387e37 00:07:18.597 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:18.857 08:52:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:18.857 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 99479e84-56f3-49d0-8367-4efb27a05d24 00:07:19.117 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ad3a6c45-8931-43c8-8053-64a539387e37 00:07:19.117 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:19.377 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:19.377 00:07:19.377 real 0m15.995s 00:07:19.377 user 0m15.715s 00:07:19.377 sys 0m1.444s 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:19.378 ************************************ 00:07:19.378 END TEST lvs_grow_clean 00:07:19.378 ************************************ 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:19.378 ************************************ 
00:07:19.378 START TEST lvs_grow_dirty 00:07:19.378 ************************************ 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:19.378 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:19.638 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:19.638 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:19.898 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:19.898 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:19.898 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:19.898 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:19.898 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:19.898 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a lvol 150 00:07:20.158 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=17fc9708-03ea-4a78-8d75-9693f31ae531 00:07:20.158 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:20.158 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:20.418 [2024-11-20 08:52:45.732829] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:20.418 [2024-11-20 08:52:45.732871] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:20.418 true 00:07:20.418 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:20.418 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:20.418 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:20.418 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:20.679 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 17fc9708-03ea-4a78-8d75-9693f31ae531 00:07:20.939 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:20.939 [2024-11-20 08:52:46.386720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.939 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:21.199 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=511364 00:07:21.200 08:52:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:21.200 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 511364 /var/tmp/bdevperf.sock 00:07:21.200 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:21.200 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 511364 ']' 00:07:21.200 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:21.200 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.200 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:21.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:21.200 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.200 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:21.200 [2024-11-20 08:52:46.627392] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:07:21.200 [2024-11-20 08:52:46.627445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511364 ] 00:07:21.200 [2024-11-20 08:52:46.708890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.460 [2024-11-20 08:52:46.738584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.031 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.031 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:22.032 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:22.292 Nvme0n1 00:07:22.292 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:22.552 [ 00:07:22.552 { 00:07:22.552 "name": "Nvme0n1", 00:07:22.552 "aliases": [ 00:07:22.552 "17fc9708-03ea-4a78-8d75-9693f31ae531" 00:07:22.552 ], 00:07:22.552 "product_name": "NVMe disk", 00:07:22.552 "block_size": 4096, 00:07:22.552 "num_blocks": 38912, 00:07:22.552 "uuid": "17fc9708-03ea-4a78-8d75-9693f31ae531", 00:07:22.552 "numa_id": 0, 00:07:22.552 "assigned_rate_limits": { 00:07:22.552 "rw_ios_per_sec": 0, 00:07:22.552 "rw_mbytes_per_sec": 0, 00:07:22.552 "r_mbytes_per_sec": 0, 00:07:22.552 "w_mbytes_per_sec": 0 00:07:22.552 }, 00:07:22.552 "claimed": false, 00:07:22.552 "zoned": false, 00:07:22.552 "supported_io_types": { 00:07:22.552 "read": true, 
00:07:22.552 "write": true, 00:07:22.552 "unmap": true, 00:07:22.552 "flush": true, 00:07:22.552 "reset": true, 00:07:22.552 "nvme_admin": true, 00:07:22.552 "nvme_io": true, 00:07:22.552 "nvme_io_md": false, 00:07:22.552 "write_zeroes": true, 00:07:22.552 "zcopy": false, 00:07:22.552 "get_zone_info": false, 00:07:22.552 "zone_management": false, 00:07:22.552 "zone_append": false, 00:07:22.552 "compare": true, 00:07:22.552 "compare_and_write": true, 00:07:22.552 "abort": true, 00:07:22.552 "seek_hole": false, 00:07:22.552 "seek_data": false, 00:07:22.552 "copy": true, 00:07:22.552 "nvme_iov_md": false 00:07:22.552 }, 00:07:22.552 "memory_domains": [ 00:07:22.552 { 00:07:22.552 "dma_device_id": "system", 00:07:22.552 "dma_device_type": 1 00:07:22.552 } 00:07:22.552 ], 00:07:22.552 "driver_specific": { 00:07:22.552 "nvme": [ 00:07:22.552 { 00:07:22.552 "trid": { 00:07:22.552 "trtype": "TCP", 00:07:22.552 "adrfam": "IPv4", 00:07:22.552 "traddr": "10.0.0.2", 00:07:22.552 "trsvcid": "4420", 00:07:22.552 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:22.552 }, 00:07:22.552 "ctrlr_data": { 00:07:22.552 "cntlid": 1, 00:07:22.552 "vendor_id": "0x8086", 00:07:22.552 "model_number": "SPDK bdev Controller", 00:07:22.552 "serial_number": "SPDK0", 00:07:22.552 "firmware_revision": "25.01", 00:07:22.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:22.552 "oacs": { 00:07:22.552 "security": 0, 00:07:22.552 "format": 0, 00:07:22.552 "firmware": 0, 00:07:22.552 "ns_manage": 0 00:07:22.552 }, 00:07:22.552 "multi_ctrlr": true, 00:07:22.552 "ana_reporting": false 00:07:22.552 }, 00:07:22.552 "vs": { 00:07:22.552 "nvme_version": "1.3" 00:07:22.552 }, 00:07:22.552 "ns_data": { 00:07:22.552 "id": 1, 00:07:22.552 "can_share": true 00:07:22.552 } 00:07:22.552 } 00:07:22.552 ], 00:07:22.552 "mp_policy": "active_passive" 00:07:22.552 } 00:07:22.552 } 00:07:22.552 ] 00:07:22.552 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=511619 
00:07:22.552 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:22.552 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:22.552 Running I/O for 10 seconds... 00:07:23.933 Latency(us) 00:07:23.933 [2024-11-20T07:52:49.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.934 Nvme0n1 : 1.00 25111.00 98.09 0.00 0.00 0.00 0.00 0.00 00:07:23.934 [2024-11-20T07:52:49.463Z] =================================================================================================================== 00:07:23.934 [2024-11-20T07:52:49.463Z] Total : 25111.00 98.09 0.00 0.00 0.00 0.00 0.00 00:07:23.934 00:07:24.504 08:52:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:24.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.765 Nvme0n1 : 2.00 25257.00 98.66 0.00 0.00 0.00 0.00 0.00 00:07:24.765 [2024-11-20T07:52:50.294Z] =================================================================================================================== 00:07:24.765 [2024-11-20T07:52:50.294Z] Total : 25257.00 98.66 0.00 0.00 0.00 0.00 0.00 00:07:24.765 00:07:24.765 true 00:07:24.765 08:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:24.765 08:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:25.025 08:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:25.025 08:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:25.025 08:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 511619 00:07:25.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.595 Nvme0n1 : 3.00 25349.67 99.02 0.00 0.00 0.00 0.00 0.00 00:07:25.595 [2024-11-20T07:52:51.124Z] =================================================================================================================== 00:07:25.595 [2024-11-20T07:52:51.124Z] Total : 25349.67 99.02 0.00 0.00 0.00 0.00 0.00 00:07:25.595 00:07:26.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.977 Nvme0n1 : 4.00 25396.25 99.20 0.00 0.00 0.00 0.00 0.00 00:07:26.977 [2024-11-20T07:52:52.506Z] =================================================================================================================== 00:07:26.977 [2024-11-20T07:52:52.506Z] Total : 25396.25 99.20 0.00 0.00 0.00 0.00 0.00 00:07:26.977 00:07:27.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.552 Nvme0n1 : 5.00 25436.00 99.36 0.00 0.00 0.00 0.00 0.00 00:07:27.552 [2024-11-20T07:52:53.081Z] =================================================================================================================== 00:07:27.552 [2024-11-20T07:52:53.081Z] Total : 25436.00 99.36 0.00 0.00 0.00 0.00 0.00 00:07:27.552 00:07:28.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.937 Nvme0n1 : 6.00 25463.33 99.47 0.00 0.00 0.00 0.00 0.00 00:07:28.937 [2024-11-20T07:52:54.466Z] =================================================================================================================== 00:07:28.937 
[2024-11-20T07:52:54.466Z] Total : 25463.33 99.47 0.00 0.00 0.00 0.00 0.00 00:07:28.937 00:07:29.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.878 Nvme0n1 : 7.00 25482.86 99.54 0.00 0.00 0.00 0.00 0.00 00:07:29.878 [2024-11-20T07:52:55.407Z] =================================================================================================================== 00:07:29.878 [2024-11-20T07:52:55.407Z] Total : 25482.86 99.54 0.00 0.00 0.00 0.00 0.00 00:07:29.878 00:07:30.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.818 Nvme0n1 : 8.00 25505.12 99.63 0.00 0.00 0.00 0.00 0.00 00:07:30.818 [2024-11-20T07:52:56.347Z] =================================================================================================================== 00:07:30.818 [2024-11-20T07:52:56.347Z] Total : 25505.12 99.63 0.00 0.00 0.00 0.00 0.00 00:07:30.818 00:07:31.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.759 Nvme0n1 : 9.00 25522.22 99.70 0.00 0.00 0.00 0.00 0.00 00:07:31.759 [2024-11-20T07:52:57.288Z] =================================================================================================================== 00:07:31.759 [2024-11-20T07:52:57.288Z] Total : 25522.22 99.70 0.00 0.00 0.00 0.00 0.00 00:07:31.759 00:07:32.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.699 Nvme0n1 : 10.00 25536.40 99.75 0.00 0.00 0.00 0.00 0.00 00:07:32.699 [2024-11-20T07:52:58.228Z] =================================================================================================================== 00:07:32.699 [2024-11-20T07:52:58.228Z] Total : 25536.40 99.75 0.00 0.00 0.00 0.00 0.00 00:07:32.699 00:07:32.699 00:07:32.699 Latency(us) 00:07:32.699 [2024-11-20T07:52:58.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:32.699 Nvme0n1 : 10.00 25535.02 99.75 0.00 0.00 5009.54 3085.65 10103.47 00:07:32.699 [2024-11-20T07:52:58.228Z] =================================================================================================================== 00:07:32.699 [2024-11-20T07:52:58.228Z] Total : 25535.02 99.75 0.00 0.00 5009.54 3085.65 10103.47 00:07:32.699 { 00:07:32.699 "results": [ 00:07:32.699 { 00:07:32.699 "job": "Nvme0n1", 00:07:32.699 "core_mask": "0x2", 00:07:32.699 "workload": "randwrite", 00:07:32.699 "status": "finished", 00:07:32.699 "queue_depth": 128, 00:07:32.699 "io_size": 4096, 00:07:32.699 "runtime": 10.003006, 00:07:32.699 "iops": 25535.024171733978, 00:07:32.699 "mibps": 99.74618817083585, 00:07:32.699 "io_failed": 0, 00:07:32.699 "io_timeout": 0, 00:07:32.699 "avg_latency_us": 5009.535041688363, 00:07:32.699 "min_latency_us": 3085.653333333333, 00:07:32.699 "max_latency_us": 10103.466666666667 00:07:32.699 } 00:07:32.699 ], 00:07:32.699 "core_count": 1 00:07:32.699 } 00:07:32.699 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 511364 00:07:32.699 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 511364 ']' 00:07:32.699 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 511364 00:07:32.699 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:32.699 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.699 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 511364 00:07:32.699 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:32.699 08:52:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:32.699 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 511364' 00:07:32.699 killing process with pid 511364 00:07:32.699 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 511364 00:07:32.699 Received shutdown signal, test time was about 10.000000 seconds 00:07:32.699 00:07:32.700 Latency(us) 00:07:32.700 [2024-11-20T07:52:58.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.700 [2024-11-20T07:52:58.229Z] =================================================================================================================== 00:07:32.700 [2024-11-20T07:52:58.229Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:32.700 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 511364 00:07:32.960 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.960 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:33.220 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:33.220 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:33.480 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 507491 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 507491 00:07:33.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 507491 Killed "${NVMF_APP[@]}" "$@" 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=513962 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 513962 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 513962 ']' 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.481 08:52:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.481 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:33.481 [2024-11-20 08:52:58.922983] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:07:33.481 [2024-11-20 08:52:58.923040] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.739 [2024-11-20 08:52:59.016973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.739 [2024-11-20 08:52:59.047813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.739 [2024-11-20 08:52:59.047842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.739 [2024-11-20 08:52:59.047848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.739 [2024-11-20 08:52:59.047852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.739 [2024-11-20 08:52:59.047856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:33.739 [2024-11-20 08:52:59.048326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.344 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.344 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:34.344 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:34.344 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.344 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:34.344 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.344 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:34.627 [2024-11-20 08:52:59.902280] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:34.627 [2024-11-20 08:52:59.902355] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:34.627 [2024-11-20 08:52:59.902378] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:34.627 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:34.627 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 17fc9708-03ea-4a78-8d75-9693f31ae531 00:07:34.627 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=17fc9708-03ea-4a78-8d75-9693f31ae531 
00:07:34.627 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:34.627 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:34.627 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:34.627 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:34.627 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:34.627 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 17fc9708-03ea-4a78-8d75-9693f31ae531 -t 2000 00:07:34.914 [ 00:07:34.914 { 00:07:34.914 "name": "17fc9708-03ea-4a78-8d75-9693f31ae531", 00:07:34.914 "aliases": [ 00:07:34.914 "lvs/lvol" 00:07:34.914 ], 00:07:34.914 "product_name": "Logical Volume", 00:07:34.914 "block_size": 4096, 00:07:34.914 "num_blocks": 38912, 00:07:34.914 "uuid": "17fc9708-03ea-4a78-8d75-9693f31ae531", 00:07:34.914 "assigned_rate_limits": { 00:07:34.914 "rw_ios_per_sec": 0, 00:07:34.914 "rw_mbytes_per_sec": 0, 00:07:34.914 "r_mbytes_per_sec": 0, 00:07:34.914 "w_mbytes_per_sec": 0 00:07:34.914 }, 00:07:34.914 "claimed": false, 00:07:34.914 "zoned": false, 00:07:34.914 "supported_io_types": { 00:07:34.914 "read": true, 00:07:34.914 "write": true, 00:07:34.914 "unmap": true, 00:07:34.914 "flush": false, 00:07:34.914 "reset": true, 00:07:34.914 "nvme_admin": false, 00:07:34.914 "nvme_io": false, 00:07:34.914 "nvme_io_md": false, 00:07:34.914 "write_zeroes": true, 00:07:34.914 "zcopy": false, 00:07:34.914 "get_zone_info": false, 00:07:34.914 "zone_management": false, 00:07:34.914 "zone_append": 
false, 00:07:34.914 "compare": false, 00:07:34.914 "compare_and_write": false, 00:07:34.914 "abort": false, 00:07:34.914 "seek_hole": true, 00:07:34.914 "seek_data": true, 00:07:34.914 "copy": false, 00:07:34.914 "nvme_iov_md": false 00:07:34.914 }, 00:07:34.914 "driver_specific": { 00:07:34.914 "lvol": { 00:07:34.914 "lvol_store_uuid": "84bc981d-8a85-42b2-9c3f-57d00b592d1a", 00:07:34.914 "base_bdev": "aio_bdev", 00:07:34.914 "thin_provision": false, 00:07:34.914 "num_allocated_clusters": 38, 00:07:34.914 "snapshot": false, 00:07:34.914 "clone": false, 00:07:34.914 "esnap_clone": false 00:07:34.914 } 00:07:34.914 } 00:07:34.915 } 00:07:34.915 ] 00:07:34.915 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:34.915 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:34.915 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:34.915 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:34.915 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:34.915 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:35.175 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:35.175 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:35.435 [2024-11-20 08:53:00.730860] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.436 08:53:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:35.436 request: 00:07:35.436 { 00:07:35.436 "uuid": "84bc981d-8a85-42b2-9c3f-57d00b592d1a", 00:07:35.436 "method": "bdev_lvol_get_lvstores", 00:07:35.436 "req_id": 1 00:07:35.436 } 00:07:35.436 Got JSON-RPC error response 00:07:35.436 response: 00:07:35.436 { 00:07:35.436 "code": -19, 00:07:35.436 "message": "No such device" 00:07:35.436 } 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.436 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:35.696 aio_bdev 00:07:35.696 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 17fc9708-03ea-4a78-8d75-9693f31ae531 00:07:35.696 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=17fc9708-03ea-4a78-8d75-9693f31ae531 00:07:35.696 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:35.696 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:35.696 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:35.696 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:35.696 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:35.956 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 17fc9708-03ea-4a78-8d75-9693f31ae531 -t 2000 00:07:35.957 [ 00:07:35.957 { 00:07:35.957 "name": "17fc9708-03ea-4a78-8d75-9693f31ae531", 00:07:35.957 "aliases": [ 00:07:35.957 "lvs/lvol" 00:07:35.957 ], 00:07:35.957 "product_name": "Logical Volume", 00:07:35.957 "block_size": 4096, 00:07:35.957 "num_blocks": 38912, 00:07:35.957 "uuid": "17fc9708-03ea-4a78-8d75-9693f31ae531", 00:07:35.957 "assigned_rate_limits": { 00:07:35.957 "rw_ios_per_sec": 0, 00:07:35.957 "rw_mbytes_per_sec": 0, 00:07:35.957 "r_mbytes_per_sec": 0, 00:07:35.957 "w_mbytes_per_sec": 0 00:07:35.957 }, 00:07:35.957 "claimed": false, 00:07:35.957 "zoned": false, 00:07:35.957 "supported_io_types": { 00:07:35.957 "read": true, 00:07:35.957 "write": true, 00:07:35.957 "unmap": true, 00:07:35.957 "flush": false, 00:07:35.957 "reset": true, 00:07:35.957 "nvme_admin": false, 00:07:35.957 "nvme_io": false, 00:07:35.957 "nvme_io_md": false, 00:07:35.957 "write_zeroes": true, 00:07:35.957 "zcopy": false, 00:07:35.957 "get_zone_info": false, 00:07:35.957 "zone_management": false, 00:07:35.957 "zone_append": false, 00:07:35.957 "compare": false, 00:07:35.957 "compare_and_write": false, 
00:07:35.957 "abort": false, 00:07:35.957 "seek_hole": true, 00:07:35.957 "seek_data": true, 00:07:35.957 "copy": false, 00:07:35.957 "nvme_iov_md": false 00:07:35.957 }, 00:07:35.957 "driver_specific": { 00:07:35.957 "lvol": { 00:07:35.957 "lvol_store_uuid": "84bc981d-8a85-42b2-9c3f-57d00b592d1a", 00:07:35.957 "base_bdev": "aio_bdev", 00:07:35.957 "thin_provision": false, 00:07:35.957 "num_allocated_clusters": 38, 00:07:35.957 "snapshot": false, 00:07:35.957 "clone": false, 00:07:35.957 "esnap_clone": false 00:07:35.957 } 00:07:35.957 } 00:07:35.957 } 00:07:35.957 ] 00:07:35.957 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:35.957 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:35.957 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:36.217 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:36.217 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:36.217 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:36.477 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:36.477 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 17fc9708-03ea-4a78-8d75-9693f31ae531 00:07:36.477 08:53:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84bc981d-8a85-42b2-9c3f-57d00b592d1a 00:07:36.737 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:36.998 00:07:36.998 real 0m17.529s 00:07:36.998 user 0m46.078s 00:07:36.998 sys 0m2.950s 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:36.998 ************************************ 00:07:36.998 END TEST lvs_grow_dirty 00:07:36.998 ************************************ 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:36.998 nvmf_trace.0 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:36.998 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:36.998 rmmod nvme_tcp 00:07:37.259 rmmod nvme_fabrics 00:07:37.259 rmmod nvme_keyring 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 513962 ']' 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 513962 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 513962 ']' 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 513962 
00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513962 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513962' 00:07:37.259 killing process with pid 513962 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 513962 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 513962 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.259 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.798 08:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:39.798 00:07:39.798 real 0m44.889s 00:07:39.798 user 1m8.178s 00:07:39.798 sys 0m10.502s 00:07:39.798 08:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.798 08:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.798 ************************************ 00:07:39.798 END TEST nvmf_lvs_grow 00:07:39.798 ************************************ 00:07:39.798 08:53:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:39.798 08:53:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:39.798 08:53:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.798 08:53:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:39.798 ************************************ 00:07:39.798 START TEST nvmf_bdev_io_wait 00:07:39.798 ************************************ 00:07:39.798 08:53:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:39.798 * Looking for test storage... 
00:07:39.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.798 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.798 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.798 --rc genhtml_branch_coverage=1 00:07:39.798 --rc genhtml_function_coverage=1 00:07:39.798 --rc genhtml_legend=1 00:07:39.798 --rc geninfo_all_blocks=1 00:07:39.798 --rc geninfo_unexecuted_blocks=1 00:07:39.798 00:07:39.799 ' 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.799 --rc genhtml_branch_coverage=1 00:07:39.799 --rc genhtml_function_coverage=1 00:07:39.799 --rc genhtml_legend=1 00:07:39.799 --rc geninfo_all_blocks=1 00:07:39.799 --rc geninfo_unexecuted_blocks=1 00:07:39.799 00:07:39.799 ' 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.799 --rc genhtml_branch_coverage=1 00:07:39.799 --rc genhtml_function_coverage=1 00:07:39.799 --rc genhtml_legend=1 00:07:39.799 --rc geninfo_all_blocks=1 00:07:39.799 --rc geninfo_unexecuted_blocks=1 00:07:39.799 00:07:39.799 ' 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.799 --rc genhtml_branch_coverage=1 00:07:39.799 --rc genhtml_function_coverage=1 00:07:39.799 --rc genhtml_legend=1 00:07:39.799 --rc geninfo_all_blocks=1 00:07:39.799 --rc geninfo_unexecuted_blocks=1 00:07:39.799 00:07:39.799 ' 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.799 08:53:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:39.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:39.799 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.938 08:53:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:47.938 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:47.938 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.938 08:53:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.938 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:47.939 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.939 
08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:47.939 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.939 08:53:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:47.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:07:47.939 00:07:47.939 --- 10.0.0.2 ping statistics --- 00:07:47.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.939 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:47.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:07:47.939 00:07:47.939 --- 10.0.0.1 ping statistics --- 00:07:47.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.939 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=519047 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 519047 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 519047 ']' 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.939 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.939 [2024-11-20 08:53:12.750743] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:07:47.939 [2024-11-20 08:53:12.750804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.939 [2024-11-20 08:53:12.848983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.939 [2024-11-20 08:53:12.903232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.939 [2024-11-20 08:53:12.903282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:47.939 [2024-11-20 08:53:12.903291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.939 [2024-11-20 08:53:12.903301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.939 [2024-11-20 08:53:12.903308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.939 [2024-11-20 08:53:12.905715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.939 [2024-11-20 08:53:12.905875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.939 [2024-11-20 08:53:12.906014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.939 [2024-11-20 08:53:12.906014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.201 08:53:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.201 [2024-11-20 08:53:13.701103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.201 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 Malloc0 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.464 
08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 [2024-11-20 08:53:13.766796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=519105 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=519107 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.464 { 00:07:48.464 "params": { 00:07:48.464 "name": "Nvme$subsystem", 00:07:48.464 "trtype": "$TEST_TRANSPORT", 00:07:48.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.464 "adrfam": "ipv4", 00:07:48.464 "trsvcid": "$NVMF_PORT", 00:07:48.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.464 "hdgst": ${hdgst:-false}, 00:07:48.464 "ddgst": ${ddgst:-false} 00:07:48.464 }, 00:07:48.464 "method": "bdev_nvme_attach_controller" 00:07:48.464 } 00:07:48.464 EOF 00:07:48.464 )") 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=519109 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.464 { 00:07:48.464 "params": { 00:07:48.464 
"name": "Nvme$subsystem", 00:07:48.464 "trtype": "$TEST_TRANSPORT", 00:07:48.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.464 "adrfam": "ipv4", 00:07:48.464 "trsvcid": "$NVMF_PORT", 00:07:48.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.464 "hdgst": ${hdgst:-false}, 00:07:48.464 "ddgst": ${ddgst:-false} 00:07:48.464 }, 00:07:48.464 "method": "bdev_nvme_attach_controller" 00:07:48.464 } 00:07:48.464 EOF 00:07:48.464 )") 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=519112 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.464 { 00:07:48.464 "params": { 00:07:48.464 "name": "Nvme$subsystem", 00:07:48.464 "trtype": "$TEST_TRANSPORT", 00:07:48.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.464 "adrfam": "ipv4", 00:07:48.464 "trsvcid": "$NVMF_PORT", 00:07:48.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.464 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:48.464 "hdgst": ${hdgst:-false}, 00:07:48.464 "ddgst": ${ddgst:-false} 00:07:48.464 }, 00:07:48.464 "method": "bdev_nvme_attach_controller" 00:07:48.464 } 00:07:48.464 EOF 00:07:48.464 )") 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.464 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.464 { 00:07:48.464 "params": { 00:07:48.464 "name": "Nvme$subsystem", 00:07:48.464 "trtype": "$TEST_TRANSPORT", 00:07:48.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.464 "adrfam": "ipv4", 00:07:48.465 "trsvcid": "$NVMF_PORT", 00:07:48.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.465 "hdgst": ${hdgst:-false}, 00:07:48.465 "ddgst": ${ddgst:-false} 00:07:48.465 }, 00:07:48.465 "method": "bdev_nvme_attach_controller" 00:07:48.465 } 00:07:48.465 EOF 00:07:48.465 )") 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 519105 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.465 "params": { 00:07:48.465 "name": "Nvme1", 00:07:48.465 "trtype": "tcp", 00:07:48.465 "traddr": "10.0.0.2", 00:07:48.465 "adrfam": "ipv4", 00:07:48.465 "trsvcid": "4420", 00:07:48.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.465 "hdgst": false, 00:07:48.465 "ddgst": false 00:07:48.465 }, 00:07:48.465 "method": "bdev_nvme_attach_controller" 00:07:48.465 }' 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.465 "params": { 00:07:48.465 "name": "Nvme1", 00:07:48.465 "trtype": "tcp", 00:07:48.465 "traddr": "10.0.0.2", 00:07:48.465 "adrfam": "ipv4", 00:07:48.465 "trsvcid": "4420", 00:07:48.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.465 "hdgst": false, 00:07:48.465 "ddgst": false 00:07:48.465 }, 00:07:48.465 "method": "bdev_nvme_attach_controller" 00:07:48.465 }' 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.465 "params": { 00:07:48.465 "name": "Nvme1", 00:07:48.465 "trtype": "tcp", 00:07:48.465 "traddr": "10.0.0.2", 00:07:48.465 "adrfam": "ipv4", 00:07:48.465 "trsvcid": "4420", 00:07:48.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.465 "hdgst": false, 00:07:48.465 "ddgst": false 00:07:48.465 }, 00:07:48.465 "method": "bdev_nvme_attach_controller" 00:07:48.465 }' 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.465 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.465 "params": { 00:07:48.465 "name": "Nvme1", 00:07:48.465 "trtype": "tcp", 00:07:48.465 "traddr": "10.0.0.2", 00:07:48.465 "adrfam": "ipv4", 00:07:48.465 "trsvcid": "4420", 00:07:48.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.465 "hdgst": false, 00:07:48.465 "ddgst": false 00:07:48.465 }, 00:07:48.465 "method": "bdev_nvme_attach_controller" 00:07:48.465 }' 00:07:48.465 [2024-11-20 08:53:13.825361] Starting SPDK v25.01-pre git sha1 
17ebaf46f / DPDK 24.03.0 initialization... 00:07:48.465 [2024-11-20 08:53:13.825421] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:48.465 [2024-11-20 08:53:13.826095] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:07:48.465 [2024-11-20 08:53:13.826152] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:48.465 [2024-11-20 08:53:13.830506] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:07:48.465 [2024-11-20 08:53:13.830566] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:48.465 [2024-11-20 08:53:13.830685] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:07:48.465 [2024-11-20 08:53:13.830757] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:48.727 [2024-11-20 08:53:13.998834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.727 [2024-11-20 08:53:14.038227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:48.727 [2024-11-20 08:53:14.072592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.727 [2024-11-20 08:53:14.111663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:48.727 [2024-11-20 08:53:14.139411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.727 [2024-11-20 08:53:14.179071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:48.727 [2024-11-20 08:53:14.230029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.988 [2024-11-20 08:53:14.270737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:48.988 Running I/O for 1 seconds... 00:07:48.988 Running I/O for 1 seconds... 00:07:48.988 Running I/O for 1 seconds... 00:07:49.249 Running I/O for 1 seconds... 
00:07:50.198 10573.00 IOPS, 41.30 MiB/s [2024-11-20T07:53:15.727Z] 8816.00 IOPS, 34.44 MiB/s 00:07:50.198 Latency(us) 00:07:50.198 [2024-11-20T07:53:15.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.198 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:50.198 Nvme1n1 : 1.01 10621.65 41.49 0.00 0.00 12003.36 6389.76 19005.44 00:07:50.198 [2024-11-20T07:53:15.727Z] =================================================================================================================== 00:07:50.198 [2024-11-20T07:53:15.727Z] Total : 10621.65 41.49 0.00 0.00 12003.36 6389.76 19005.44 00:07:50.198 00:07:50.198 Latency(us) 00:07:50.198 [2024-11-20T07:53:15.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.198 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:50.198 Nvme1n1 : 1.01 8876.75 34.67 0.00 0.00 14355.10 5952.85 23046.83 00:07:50.198 [2024-11-20T07:53:15.727Z] =================================================================================================================== 00:07:50.198 [2024-11-20T07:53:15.727Z] Total : 8876.75 34.67 0.00 0.00 14355.10 5952.85 23046.83 00:07:50.198 10911.00 IOPS, 42.62 MiB/s 00:07:50.198 Latency(us) 00:07:50.198 [2024-11-20T07:53:15.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.198 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:50.198 Nvme1n1 : 1.01 10988.31 42.92 0.00 0.00 11611.38 4587.52 24248.32 00:07:50.198 [2024-11-20T07:53:15.727Z] =================================================================================================================== 00:07:50.198 [2024-11-20T07:53:15.727Z] Total : 10988.31 42.92 0.00 0.00 11611.38 4587.52 24248.32 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 519107 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 519109 00:07:50.198 186768.00 IOPS, 729.56 MiB/s 00:07:50.198 Latency(us) 00:07:50.198 [2024-11-20T07:53:15.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.198 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:50.198 Nvme1n1 : 1.00 186391.66 728.09 0.00 0.00 683.15 303.79 1993.39 00:07:50.198 [2024-11-20T07:53:15.727Z] =================================================================================================================== 00:07:50.198 [2024-11-20T07:53:15.727Z] Total : 186391.66 728.09 0.00 0.00 683.15 303.79 1993.39 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 519112 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- 
# for i in {1..20} 00:07:50.198 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.198 rmmod nvme_tcp 00:07:50.460 rmmod nvme_fabrics 00:07:50.460 rmmod nvme_keyring 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 519047 ']' 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 519047 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 519047 ']' 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 519047 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 519047 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 519047' 00:07:50.460 killing process with pid 519047 00:07:50.460 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 519047 00:07:50.460 08:53:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 519047
00:07:50.721 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:50.721 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:50.721 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:50.721 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:07:50.721 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:07:50.721 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:50.721 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:07:50.721 08:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:50.721 08:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:50.721 08:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:50.721 08:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:50.721 08:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:52.637 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:52.637
00:07:52.637 real 0m13.176s
00:07:52.637 user 0m19.815s
00:07:52.637 sys 0m7.454s
00:07:52.638 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.638 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:52.638 ************************************
00:07:52.638 END TEST nvmf_bdev_io_wait
00:07:52.638 ************************************
00:07:52.638 08:53:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:07:52.638 08:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:52.638 08:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:52.638 08:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:52.899 ************************************
00:07:52.899 START TEST nvmf_queue_depth
00:07:52.899 ************************************
00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:07:52.899 * Looking for test storage...
00:07:52.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version
00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- #
IFS=.-: 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:52.899 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:52.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.900 --rc genhtml_branch_coverage=1 00:07:52.900 --rc genhtml_function_coverage=1 00:07:52.900 --rc genhtml_legend=1 00:07:52.900 --rc geninfo_all_blocks=1 00:07:52.900 --rc 
geninfo_unexecuted_blocks=1 00:07:52.900 00:07:52.900 ' 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:52.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.900 --rc genhtml_branch_coverage=1 00:07:52.900 --rc genhtml_function_coverage=1 00:07:52.900 --rc genhtml_legend=1 00:07:52.900 --rc geninfo_all_blocks=1 00:07:52.900 --rc geninfo_unexecuted_blocks=1 00:07:52.900 00:07:52.900 ' 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:52.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.900 --rc genhtml_branch_coverage=1 00:07:52.900 --rc genhtml_function_coverage=1 00:07:52.900 --rc genhtml_legend=1 00:07:52.900 --rc geninfo_all_blocks=1 00:07:52.900 --rc geninfo_unexecuted_blocks=1 00:07:52.900 00:07:52.900 ' 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:52.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.900 --rc genhtml_branch_coverage=1 00:07:52.900 --rc genhtml_function_coverage=1 00:07:52.900 --rc genhtml_legend=1 00:07:52.900 --rc geninfo_all_blocks=1 00:07:52.900 --rc geninfo_unexecuted_blocks=1 00:07:52.900 00:07:52.900 ' 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.900 08:53:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.900 08:53:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:52.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.900 08:53:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:52.900 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:01.047 08:53:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:01.047 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:01.047 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:01.047 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:01.047 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.047 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.048 
08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:01.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:01.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms
00:08:01.048
00:08:01.048 --- 10.0.0.2 ping statistics ---
00:08:01.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:01.048 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:01.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:01.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms
00:08:01.048
00:08:01.048 --- 10.0.0.1 ping statistics ---
00:08:01.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:01.048 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=523785
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 523785
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 523785 ']'
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:01.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:01.048 08:53:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:01.048 [2024-11-20 08:53:25.987029] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization...
00:08:01.048 [2024-11-20 08:53:25.987096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:01.048 [2024-11-20 08:53:26.088545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:01.048 [2024-11-20 08:53:26.141110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:01.048 [2024-11-20 08:53:26.141172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:01.048 [2024-11-20 08:53:26.141181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:01.048 [2024-11-20 08:53:26.141189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:01.048 [2024-11-20 08:53:26.141195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:01.048 [2024-11-20 08:53:26.142023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:01.309 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:01.309 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:08:01.309 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:01.309 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:01.309 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:01.309 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:01.309 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:01.309 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.309 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:01.571 [2024-11-20 08:53:26.840371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.571 Malloc0 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.571 [2024-11-20 08:53:26.901443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.571 08:53:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=524131 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:01.571 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:01.572 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 524131 /var/tmp/bdevperf.sock 00:08:01.572 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 524131 ']' 00:08:01.572 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:01.572 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.572 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:01.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:01.572 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.572 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.572 [2024-11-20 08:53:26.960106] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:08:01.572 [2024-11-20 08:53:26.960176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524131 ] 00:08:01.572 [2024-11-20 08:53:27.051889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.833 [2024-11-20 08:53:27.104716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.404 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.404 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:02.404 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:02.404 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.404 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.404 NVMe0n1 00:08:02.404 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.404 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:02.665 Running I/O for 10 seconds... 
00:08:04.542 9216.00 IOPS, 36.00 MiB/s [2024-11-20T07:53:31.012Z] 10379.50 IOPS, 40.54 MiB/s [2024-11-20T07:53:32.395Z] 10918.33 IOPS, 42.65 MiB/s [2024-11-20T07:53:33.336Z] 11081.50 IOPS, 43.29 MiB/s [2024-11-20T07:53:34.276Z] 11469.00 IOPS, 44.80 MiB/s [2024-11-20T07:53:35.218Z] 11783.67 IOPS, 46.03 MiB/s [2024-11-20T07:53:36.158Z] 12127.86 IOPS, 47.37 MiB/s [2024-11-20T07:53:37.100Z] 12286.62 IOPS, 47.99 MiB/s [2024-11-20T07:53:38.125Z] 12442.56 IOPS, 48.60 MiB/s [2024-11-20T07:53:38.125Z] 12592.10 IOPS, 49.19 MiB/s 00:08:12.596 Latency(us) 00:08:12.596 [2024-11-20T07:53:38.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.596 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:12.596 Verification LBA range: start 0x0 length 0x4000 00:08:12.596 NVMe0n1 : 10.05 12625.86 49.32 0.00 0.00 80846.15 18240.85 73400.32 00:08:12.596 [2024-11-20T07:53:38.125Z] =================================================================================================================== 00:08:12.596 [2024-11-20T07:53:38.125Z] Total : 12625.86 49.32 0.00 0.00 80846.15 18240.85 73400.32 00:08:12.596 { 00:08:12.596 "results": [ 00:08:12.596 { 00:08:12.596 "job": "NVMe0n1", 00:08:12.596 "core_mask": "0x1", 00:08:12.596 "workload": "verify", 00:08:12.596 "status": "finished", 00:08:12.596 "verify_range": { 00:08:12.596 "start": 0, 00:08:12.596 "length": 16384 00:08:12.596 }, 00:08:12.596 "queue_depth": 1024, 00:08:12.596 "io_size": 4096, 00:08:12.596 "runtime": 10.054363, 00:08:12.596 "iops": 12625.862026266606, 00:08:12.596 "mibps": 49.31977354010393, 00:08:12.596 "io_failed": 0, 00:08:12.596 "io_timeout": 0, 00:08:12.596 "avg_latency_us": 80846.15030278204, 00:08:12.596 "min_latency_us": 18240.853333333333, 00:08:12.596 "max_latency_us": 73400.32 00:08:12.596 } 00:08:12.596 ], 00:08:12.596 "core_count": 1 00:08:12.596 } 00:08:12.596 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 524131 00:08:12.596 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 524131 ']' 00:08:12.596 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 524131 00:08:12.596 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:12.596 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.597 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 524131 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 524131' 00:08:12.879 killing process with pid 524131 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 524131 00:08:12.879 Received shutdown signal, test time was about 10.000000 seconds 00:08:12.879 00:08:12.879 Latency(us) 00:08:12.879 [2024-11-20T07:53:38.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.879 [2024-11-20T07:53:38.408Z] =================================================================================================================== 00:08:12.879 [2024-11-20T07:53:38.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 524131 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.879 rmmod nvme_tcp 00:08:12.879 rmmod nvme_fabrics 00:08:12.879 rmmod nvme_keyring 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 523785 ']' 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 523785 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 523785 ']' 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 523785 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 523785 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 523785' 00:08:12.879 killing process with pid 523785 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 523785 00:08:12.879 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 523785 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.140 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.684 08:53:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:15.684 00:08:15.684 real 0m22.427s 00:08:15.684 user 0m25.745s 00:08:15.684 sys 0m6.964s 00:08:15.684 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.684 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.684 ************************************ 00:08:15.684 END TEST nvmf_queue_depth 00:08:15.684 ************************************ 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.685 ************************************ 00:08:15.685 START TEST nvmf_target_multipath 00:08:15.685 ************************************ 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:15.685 * Looking for test storage... 
00:08:15.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:15.685 08:53:40 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:15.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.685 --rc genhtml_branch_coverage=1 00:08:15.685 --rc genhtml_function_coverage=1 00:08:15.685 --rc genhtml_legend=1 00:08:15.685 --rc geninfo_all_blocks=1 00:08:15.685 --rc geninfo_unexecuted_blocks=1 00:08:15.685 00:08:15.685 ' 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:15.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.685 --rc genhtml_branch_coverage=1 00:08:15.685 --rc genhtml_function_coverage=1 00:08:15.685 --rc genhtml_legend=1 00:08:15.685 --rc geninfo_all_blocks=1 00:08:15.685 --rc geninfo_unexecuted_blocks=1 00:08:15.685 00:08:15.685 ' 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:15.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.685 --rc genhtml_branch_coverage=1 00:08:15.685 --rc genhtml_function_coverage=1 00:08:15.685 --rc genhtml_legend=1 00:08:15.685 --rc geninfo_all_blocks=1 00:08:15.685 --rc geninfo_unexecuted_blocks=1 00:08:15.685 00:08:15.685 ' 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:15.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.685 --rc genhtml_branch_coverage=1 00:08:15.685 --rc genhtml_function_coverage=1 00:08:15.685 --rc genhtml_legend=1 00:08:15.685 --rc geninfo_all_blocks=1 00:08:15.685 --rc geninfo_unexecuted_blocks=1 00:08:15.685 00:08:15.685 ' 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:15.685 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.686 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.832 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:23.833 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:23.833 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:23.833 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.833 08:53:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:23.833 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:08:23.833 00:08:23.833 --- 10.0.0.2 ping statistics --- 00:08:23.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.833 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:08:23.833 00:08:23.833 --- 10.0.0.1 ping statistics --- 00:08:23.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.833 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:23.833 only one NIC for nvmf test 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:23.833 08:53:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.833 rmmod nvme_tcp 00:08:23.833 rmmod nvme_fabrics 00:08:23.833 rmmod nvme_keyring 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:23.833 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:23.834 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:23.834 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.834 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.834 08:53:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:25.219 00:08:25.219 real 0m9.972s 00:08:25.219 user 0m2.216s 00:08:25.219 sys 0m5.697s 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:25.219 ************************************ 00:08:25.219 END TEST nvmf_target_multipath 00:08:25.219 ************************************ 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.219 ************************************ 00:08:25.219 START TEST nvmf_zcopy 00:08:25.219 ************************************ 00:08:25.219 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:25.481 * Looking for test storage... 00:08:25.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.481 08:53:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:25.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.481 --rc genhtml_branch_coverage=1 00:08:25.481 --rc genhtml_function_coverage=1 00:08:25.481 --rc genhtml_legend=1 00:08:25.481 --rc geninfo_all_blocks=1 00:08:25.481 --rc geninfo_unexecuted_blocks=1 00:08:25.481 00:08:25.481 ' 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:25.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.481 --rc genhtml_branch_coverage=1 00:08:25.481 --rc genhtml_function_coverage=1 00:08:25.481 --rc genhtml_legend=1 00:08:25.481 --rc geninfo_all_blocks=1 00:08:25.481 --rc geninfo_unexecuted_blocks=1 00:08:25.481 00:08:25.481 ' 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:25.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.481 --rc genhtml_branch_coverage=1 00:08:25.481 --rc genhtml_function_coverage=1 00:08:25.481 --rc genhtml_legend=1 00:08:25.481 --rc geninfo_all_blocks=1 00:08:25.481 --rc geninfo_unexecuted_blocks=1 00:08:25.481 00:08:25.481 ' 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:25.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.481 --rc genhtml_branch_coverage=1 00:08:25.481 --rc 
genhtml_function_coverage=1 00:08:25.481 --rc genhtml_legend=1 00:08:25.481 --rc geninfo_all_blocks=1 00:08:25.481 --rc geninfo_unexecuted_blocks=1 00:08:25.481 00:08:25.481 ' 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.481 08:53:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.481 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:25.482 08:53:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:25.482 08:53:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.626 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.626 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:33.626 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:33.626 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:33.626 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:33.626 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:33.626 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:33.627 08:53:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:33.627 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:33.627 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:33.627 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:33.627 08:53:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:33.627 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.627 08:53:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:33.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:08:33.627 00:08:33.627 --- 10.0.0.2 ping statistics --- 00:08:33.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.627 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:08:33.627 00:08:33.627 --- 10.0.0.1 ping statistics --- 00:08:33.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.627 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.627 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=534845 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 534845 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 534845 ']' 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.628 08:53:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.628 [2024-11-20 08:53:58.546005] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:08:33.628 [2024-11-20 08:53:58.546075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.628 [2024-11-20 08:53:58.644605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.628 [2024-11-20 08:53:58.694381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.628 [2024-11-20 08:53:58.694435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:33.628 [2024-11-20 08:53:58.694444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.628 [2024-11-20 08:53:58.694451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.628 [2024-11-20 08:53:58.694457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.628 [2024-11-20 08:53:58.695282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.888 [2024-11-20 08:53:59.404969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.888 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.148 [2024-11-20 08:53:59.429268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.148 malloc0 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:34.148 { 00:08:34.148 "params": { 00:08:34.148 "name": "Nvme$subsystem", 00:08:34.148 "trtype": "$TEST_TRANSPORT", 00:08:34.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:34.148 "adrfam": "ipv4", 00:08:34.148 "trsvcid": "$NVMF_PORT", 00:08:34.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:34.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:34.148 "hdgst": ${hdgst:-false}, 00:08:34.148 "ddgst": ${ddgst:-false} 00:08:34.148 }, 00:08:34.148 "method": "bdev_nvme_attach_controller" 00:08:34.148 } 00:08:34.148 EOF 00:08:34.148 )") 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:34.148 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:34.148 "params": { 00:08:34.148 "name": "Nvme1", 00:08:34.148 "trtype": "tcp", 00:08:34.148 "traddr": "10.0.0.2", 00:08:34.148 "adrfam": "ipv4", 00:08:34.148 "trsvcid": "4420", 00:08:34.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:34.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:34.148 "hdgst": false, 00:08:34.148 "ddgst": false 00:08:34.148 }, 00:08:34.148 "method": "bdev_nvme_attach_controller" 00:08:34.148 }' 00:08:34.148 [2024-11-20 08:53:59.531689] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:08:34.148 [2024-11-20 08:53:59.531753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid535051 ] 00:08:34.149 [2024-11-20 08:53:59.623887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.408 [2024-11-20 08:53:59.677883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.408 Running I/O for 10 seconds... 
00:08:36.729 6473.00 IOPS, 50.57 MiB/s [2024-11-20T07:54:03.197Z] 7140.00 IOPS, 55.78 MiB/s [2024-11-20T07:54:04.138Z] 8009.33 IOPS, 62.57 MiB/s [2024-11-20T07:54:05.080Z] 8446.50 IOPS, 65.99 MiB/s [2024-11-20T07:54:06.022Z] 8707.40 IOPS, 68.03 MiB/s [2024-11-20T07:54:06.962Z] 8884.83 IOPS, 69.41 MiB/s [2024-11-20T07:54:08.344Z] 9007.57 IOPS, 70.37 MiB/s [2024-11-20T07:54:09.284Z] 9102.50 IOPS, 71.11 MiB/s [2024-11-20T07:54:10.225Z] 9171.89 IOPS, 71.66 MiB/s [2024-11-20T07:54:10.225Z] 9231.20 IOPS, 72.12 MiB/s 00:08:44.696 Latency(us) 00:08:44.696 [2024-11-20T07:54:10.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.696 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:44.696 Verification LBA range: start 0x0 length 0x1000 00:08:44.696 Nvme1n1 : 10.01 9232.23 72.13 0.00 0.00 13818.44 665.60 28398.93 00:08:44.696 [2024-11-20T07:54:10.225Z] =================================================================================================================== 00:08:44.696 [2024-11-20T07:54:10.225Z] Total : 9232.23 72.13 0.00 0.00 13818.44 665.60 28398.93 00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=537563 00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:44.696 08:54:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:44.696 { 00:08:44.696 "params": { 00:08:44.696 "name": "Nvme$subsystem", 00:08:44.696 "trtype": "$TEST_TRANSPORT", 00:08:44.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:44.696 "adrfam": "ipv4", 00:08:44.696 "trsvcid": "$NVMF_PORT", 00:08:44.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:44.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:44.696 "hdgst": ${hdgst:-false}, 00:08:44.696 "ddgst": ${ddgst:-false} 00:08:44.696 }, 00:08:44.696 "method": "bdev_nvme_attach_controller" 00:08:44.696 } 00:08:44.696 EOF 00:08:44.696 )") 00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:44.696 [2024-11-20 08:54:10.030078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.696 [2024-11-20 08:54:10.030107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:44.696 08:54:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:44.696 "params": { 00:08:44.696 "name": "Nvme1", 00:08:44.696 "trtype": "tcp", 00:08:44.696 "traddr": "10.0.0.2", 00:08:44.696 "adrfam": "ipv4", 00:08:44.696 "trsvcid": "4420", 00:08:44.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:44.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:44.696 "hdgst": false, 00:08:44.696 "ddgst": false 00:08:44.696 }, 00:08:44.696 "method": "bdev_nvme_attach_controller" 00:08:44.696 }' 00:08:44.696 [2024-11-20 08:54:10.042078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.696 [2024-11-20 08:54:10.042087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.696 [2024-11-20 08:54:10.054106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.696 [2024-11-20 08:54:10.054114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.696 [2024-11-20 08:54:10.066136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.696 [2024-11-20 08:54:10.066149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.696 [2024-11-20 08:54:10.073809] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:08:44.696 [2024-11-20 08:54:10.073856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid537563 ]
00:08:44.696 [2024-11-20 08:54:10.078169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:44.696 [2024-11-20 08:54:10.078177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:44.696 [2024-11-20 08:54:10.157100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:44.696 [2024-11-20 08:54:10.187025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:44.957 Running I/O for 5 seconds...
00:08:46.001 19307.00 IOPS, 150.84 MiB/s [2024-11-20T07:54:11.530Z]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.147347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.159903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.159917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.172823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.172837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.185896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.185911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.198760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.198775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.212011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.212025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.225058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.225073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.238305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.238320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.251384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.251399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.264272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.264287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.278195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.278210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.291281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.291295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.785 [2024-11-20 08:54:12.304461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.785 [2024-11-20 08:54:12.304475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.316707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.316722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.329990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.330005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 19349.50 IOPS, 151.17 MiB/s [2024-11-20T07:54:12.575Z] [2024-11-20 08:54:12.342990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.343006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.356110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.356125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.368964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.368978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.382017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.382033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.395112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.395127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.407954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.407969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.421197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.421212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.434330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.434344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.447227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.447241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.460676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 
[2024-11-20 08:54:12.460690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.473927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.473942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.487029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.487043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.500005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.500020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.513490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.513504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.526752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.526766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.539842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.539857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.553033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.553048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.046 [2024-11-20 08:54:12.566465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.046 [2024-11-20 08:54:12.566479] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.578963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.578978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.592428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.592443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.605565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.605579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.618220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.618235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.631305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.631320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.644663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.644678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.658168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.658182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.671380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.671394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:47.307 [2024-11-20 08:54:12.684357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.684372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.697349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.697364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.710491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.710505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.723510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.723525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.736138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.736153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.748633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.748647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.761956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.761970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.774839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.774853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.787772] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.787786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.800871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.800886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.813633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.813647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.307 [2024-11-20 08:54:12.827077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.307 [2024-11-20 08:54:12.827092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.568 [2024-11-20 08:54:12.840300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.568 [2024-11-20 08:54:12.840316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.568 [2024-11-20 08:54:12.853039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.568 [2024-11-20 08:54:12.853054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.568 [2024-11-20 08:54:12.866301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.568 [2024-11-20 08:54:12.866316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.568 [2024-11-20 08:54:12.878890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.568 [2024-11-20 08:54:12.878905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.568 [2024-11-20 08:54:12.892057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:47.568 [2024-11-20 08:54:12.892072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.568 [2024-11-20 08:54:12.905486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.568 [2024-11-20 08:54:12.905502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:12.917887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:12.917901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:12.930932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:12.930946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:12.944441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:12.944456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:12.957423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:12.957442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:12.970649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:12.970664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:12.983370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:12.983385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:12.996911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 
[2024-11-20 08:54:12.996926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:13.009747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:13.009762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:13.023463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:13.023478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:13.036011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:13.036025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:13.048734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:13.048749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:13.061282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:13.061296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:13.074175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:13.074190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.569 [2024-11-20 08:54:13.087203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.569 [2024-11-20 08:54:13.087217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.100345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.100361] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.113274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.113289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.126112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.126127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.139147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.139167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.152365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.152380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.165859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.165874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.178621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.178635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.191777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.191791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.204870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.204889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:47.830 [2024-11-20 08:54:13.218070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.218085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.230581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.230596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.243863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.243878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.257071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.257086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.270042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.270057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.283070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.283085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.296172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.296186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.309295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.309310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.322382] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.322396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 [2024-11-20 08:54:13.335524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.335539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.830 19398.00 IOPS, 151.55 MiB/s [2024-11-20T07:54:13.359Z] [2024-11-20 08:54:13.348761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.830 [2024-11-20 08:54:13.348776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.090 [2024-11-20 08:54:13.361900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.090 [2024-11-20 08:54:13.361916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.090 [2024-11-20 08:54:13.374802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.090 [2024-11-20 08:54:13.374818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.090 [2024-11-20 08:54:13.387691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.090 [2024-11-20 08:54:13.387706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.090 [2024-11-20 08:54:13.401256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.090 [2024-11-20 08:54:13.401270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.090 [2024-11-20 08:54:13.413899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.090 [2024-11-20 08:54:13.413914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.090 [2024-11-20 08:54:13.427336] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.090 [2024-11-20 08:54:13.427351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.090 [2024-11-20 08:54:13.440527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.090 [2024-11-20 08:54:13.440543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.090 [2024-11-20 08:54:13.453943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.090 [2024-11-20 08:54:13.453962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.090 [2024-11-20 08:54:13.467261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.090 [2024-11-20 08:54:13.467276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.090 [2024-11-20 08:54:13.480382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.091 [2024-11-20 08:54:13.480397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.091 [2024-11-20 08:54:13.493446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.091 [2024-11-20 08:54:13.493461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.091 [2024-11-20 08:54:13.506599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.091 [2024-11-20 08:54:13.506614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.091 [2024-11-20 08:54:13.520015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.091 [2024-11-20 08:54:13.520030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.091 [2024-11-20 08:54:13.533590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:48.091 [2024-11-20 08:54:13.533604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.091 [2024-11-20 08:54:13.546346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.091 [2024-11-20 08:54:13.546361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.091 [2024-11-20 08:54:13.559573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.091 [2024-11-20 08:54:13.559588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.091 [2024-11-20 08:54:13.572991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.091 [2024-11-20 08:54:13.573006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.091 [2024-11-20 08:54:13.585699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.091 [2024-11-20 08:54:13.585714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.091 [2024-11-20 08:54:13.598732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.091 [2024-11-20 08:54:13.598747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.091 [2024-11-20 08:54:13.611623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.091 [2024-11-20 08:54:13.611638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.351 [2024-11-20 08:54:13.624757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.351 [2024-11-20 08:54:13.624772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.351 [2024-11-20 08:54:13.637370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.351 
[2024-11-20 08:54:13.637384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.351 [2024-11-20 08:54:13.650167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.351 [2024-11-20 08:54:13.650182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.351 [2024-11-20 08:54:13.662909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.351 [2024-11-20 08:54:13.662924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.351 [2024-11-20 08:54:13.675623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.351 [2024-11-20 08:54:13.675638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.351 [2024-11-20 08:54:13.688700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.351 [2024-11-20 08:54:13.688715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.351 [2024-11-20 08:54:13.701788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.351 [2024-11-20 08:54:13.701802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.351 [2024-11-20 08:54:13.714921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.351 [2024-11-20 08:54:13.714936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.351 [2024-11-20 08:54:13.727993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.351 [2024-11-20 08:54:13.728007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.351 [2024-11-20 08:54:13.740814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.351 [2024-11-20 08:54:13.740828] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:48.351 [2024-11-20 08:54:13.754165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:48.351 [2024-11-20 08:54:13.754179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:48.874 19384.50 IOPS, 151.44 MiB/s [2024-11-20T07:54:14.403Z]
00:08:49.918 19366.40 IOPS, 151.30 MiB/s
00:08:49.918 Latency(us)
00:08:49.918 [2024-11-20T07:54:15.447Z] Device Information : runtime(s) IOPS
MiB/s Fail/s TO/s Average min max
00:08:49.918 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:49.918 Nvme1n1 : 5.01 19369.38 151.32 0.00 0.00 6603.39 3031.04 16384.00
00:08:49.918 [2024-11-20T07:54:15.447Z] ===================================================================================================================
00:08:49.918 [2024-11-20T07:54:15.447Z] Total : 19369.38 151.32 0.00 0.00 6603.39 3031.04 16384.00
00:08:50.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (537563) - No such process
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 537563
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:50.178 delay0
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- #
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:50.178 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:50.178 [2024-11-20 08:54:15.660355] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:58.318 Initializing NVMe Controllers
00:08:58.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:58.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:58.318 Initialization complete. Launching workers.
00:08:58.318 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 247, failed: 32681
00:08:58.318 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32791, failed to submit 137
00:08:58.318 success 32725, unsuccessful 66, failed 0
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:58.318 rmmod nvme_tcp
00:08:58.318 rmmod nvme_fabrics
00:08:58.318 rmmod nvme_keyring
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 534845 ']'
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 534845
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 534845 ']'
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 534845
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 534845
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 534845'
00:08:58.318 killing process with pid 534845
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 534845
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 534845
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd
_remove_spdk_ns 00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.318 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.702 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:59.702 00:08:59.702 real 0m34.301s 00:08:59.702 user 0m45.055s 00:08:59.702 sys 0m11.821s 00:08:59.702 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.702 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.702 ************************************ 00:08:59.702 END TEST nvmf_zcopy 00:08:59.702 ************************************ 00:08:59.702 08:54:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:59.702 08:54:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.702 08:54:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.702 08:54:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.702 ************************************ 00:08:59.702 START TEST nvmf_nmic 00:08:59.702 ************************************ 00:08:59.702 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:59.702 * Looking for test storage... 
00:08:59.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.702 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:59.702 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:59.702 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:59.963 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.964 08:54:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:59.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.964 --rc genhtml_branch_coverage=1 00:08:59.964 --rc genhtml_function_coverage=1 00:08:59.964 --rc genhtml_legend=1 00:08:59.964 --rc geninfo_all_blocks=1 00:08:59.964 --rc geninfo_unexecuted_blocks=1 
00:08:59.964 00:08:59.964 ' 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:59.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.964 --rc genhtml_branch_coverage=1 00:08:59.964 --rc genhtml_function_coverage=1 00:08:59.964 --rc genhtml_legend=1 00:08:59.964 --rc geninfo_all_blocks=1 00:08:59.964 --rc geninfo_unexecuted_blocks=1 00:08:59.964 00:08:59.964 ' 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:59.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.964 --rc genhtml_branch_coverage=1 00:08:59.964 --rc genhtml_function_coverage=1 00:08:59.964 --rc genhtml_legend=1 00:08:59.964 --rc geninfo_all_blocks=1 00:08:59.964 --rc geninfo_unexecuted_blocks=1 00:08:59.964 00:08:59.964 ' 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:59.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.964 --rc genhtml_branch_coverage=1 00:08:59.964 --rc genhtml_function_coverage=1 00:08:59.964 --rc genhtml_legend=1 00:08:59.964 --rc geninfo_all_blocks=1 00:08:59.964 --rc geninfo_unexecuted_blocks=1 00:08:59.964 00:08:59.964 ' 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.964 08:54:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.964 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.965 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.965 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.965 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:59.965 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:59.965 
08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.965 08:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.257 08:54:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:08.257 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:08.257 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:08.257 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:08.257 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:08.257 
08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.257 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:09:08.258 00:09:08.258 --- 10.0.0.2 ping statistics --- 00:09:08.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.258 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:09:08.258 00:09:08.258 --- 10.0.0.1 ping statistics --- 00:09:08.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.258 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=544468 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 544468 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 544468 ']' 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.258 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.258 [2024-11-20 08:54:32.901735] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:09:08.258 [2024-11-20 08:54:32.901801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.258 [2024-11-20 08:54:33.000553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.258 [2024-11-20 08:54:33.055839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.258 [2024-11-20 08:54:33.055893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.258 [2024-11-20 08:54:33.055901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.258 [2024-11-20 08:54:33.055909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.258 [2024-11-20 08:54:33.055915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.258 [2024-11-20 08:54:33.058108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.258 [2024-11-20 08:54:33.058273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.258 [2024-11-20 08:54:33.058321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.258 [2024-11-20 08:54:33.058322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.258 [2024-11-20 08:54:33.771387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.258 
08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.258 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.519 Malloc0 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.519 [2024-11-20 08:54:33.854302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:08.519 test case1: single bdev can't be used in multiple subsystems 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.519 [2024-11-20 08:54:33.890115] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:08.519 [2024-11-20 
08:54:33.890142] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:08.519 [2024-11-20 08:54:33.890151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.519 request: 00:09:08.519 { 00:09:08.519 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:08.519 "namespace": { 00:09:08.519 "bdev_name": "Malloc0", 00:09:08.519 "no_auto_visible": false 00:09:08.519 }, 00:09:08.519 "method": "nvmf_subsystem_add_ns", 00:09:08.519 "req_id": 1 00:09:08.519 } 00:09:08.519 Got JSON-RPC error response 00:09:08.519 response: 00:09:08.519 { 00:09:08.519 "code": -32602, 00:09:08.519 "message": "Invalid parameters" 00:09:08.519 } 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:08.519 Adding namespace failed - expected result. 
00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:08.519 test case2: host connect to nvmf target in multiple paths 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.519 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.519 [2024-11-20 08:54:33.902334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:08.520 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.520 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.903 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:11.816 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.816 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:11.816 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.816 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:11.816 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:13.749 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:13.749 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:13.749 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.749 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:13.749 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.749 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:13.749 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:13.749 [global] 00:09:13.749 thread=1 00:09:13.749 invalidate=1 00:09:13.749 rw=write 00:09:13.749 time_based=1 00:09:13.749 runtime=1 00:09:13.749 ioengine=libaio 00:09:13.749 direct=1 00:09:13.749 bs=4096 00:09:13.749 iodepth=1 00:09:13.749 norandommap=0 00:09:13.749 numjobs=1 00:09:13.749 00:09:13.749 verify_dump=1 00:09:13.749 verify_backlog=512 00:09:13.749 verify_state_save=0 00:09:13.749 do_verify=1 00:09:13.749 verify=crc32c-intel 00:09:13.749 [job0] 00:09:13.749 filename=/dev/nvme0n1 00:09:13.749 Could not set queue depth (nvme0n1) 00:09:14.011 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.011 fio-3.35 00:09:14.011 Starting 1 thread 00:09:14.953 00:09:14.953 job0: (groupid=0, jobs=1): err= 0: pid=545840: Wed Nov 20 08:54:40 2024 00:09:14.953 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:14.953 slat (nsec): min=3308, max=57540, avg=23218.72, stdev=6951.06 00:09:14.953 clat (usec): min=281, max=1595, avg=976.86, stdev=150.48 00:09:14.953 lat (usec): min=303, max=1620, 
avg=1000.08, stdev=153.66 00:09:14.953 clat percentiles (usec): 00:09:14.953 | 1.00th=[ 594], 5.00th=[ 701], 10.00th=[ 766], 20.00th=[ 857], 00:09:14.953 | 30.00th=[ 914], 40.00th=[ 971], 50.00th=[ 1004], 60.00th=[ 1037], 00:09:14.953 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:09:14.953 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1598], 99.95th=[ 1598], 00:09:14.953 | 99.99th=[ 1598] 00:09:14.953 write: IOPS=800, BW=3201KiB/s (3278kB/s)(3204KiB/1001msec); 0 zone resets 00:09:14.953 slat (nsec): min=9477, max=67478, avg=27267.89, stdev=10674.27 00:09:14.953 clat (usec): min=220, max=825, avg=571.46, stdev=101.01 00:09:14.953 lat (usec): min=231, max=858, avg=598.72, stdev=105.88 00:09:14.953 clat percentiles (usec): 00:09:14.953 | 1.00th=[ 338], 5.00th=[ 400], 10.00th=[ 433], 20.00th=[ 486], 00:09:14.953 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:09:14.953 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 717], 00:09:14.953 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 824], 99.95th=[ 824], 00:09:14.953 | 99.99th=[ 824] 00:09:14.953 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:14.953 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:14.953 lat (usec) : 250=0.08%, 500=15.61%, 750=47.37%, 1000=16.98% 00:09:14.953 lat (msec) : 2=19.95% 00:09:14.953 cpu : usr=1.90%, sys=3.90%, ctx=1314, majf=0, minf=1 00:09:14.953 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.953 issued rwts: total=512,801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.953 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.953 00:09:14.953 Run status group 0 (all jobs): 00:09:14.953 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), 
io=2048KiB (2097kB), run=1001-1001msec 00:09:14.953 WRITE: bw=3201KiB/s (3278kB/s), 3201KiB/s-3201KiB/s (3278kB/s-3278kB/s), io=3204KiB (3281kB), run=1001-1001msec 00:09:14.953 00:09:14.953 Disk stats (read/write): 00:09:14.953 nvme0n1: ios=562/621, merge=0/0, ticks=550/343, in_queue=893, util=93.89% 00:09:14.953 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:15.214 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.215 rmmod nvme_tcp 00:09:15.215 rmmod nvme_fabrics 00:09:15.215 rmmod nvme_keyring 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 544468 ']' 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 544468 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 544468 ']' 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 544468 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.215 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 544468 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 544468' 00:09:15.476 killing process with pid 544468 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 544468 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 544468 00:09:15.476 08:54:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.476 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.019 08:54:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:18.019 00:09:18.019 real 0m17.833s 00:09:18.019 user 0m48.470s 00:09:18.019 sys 0m6.651s 00:09:18.019 08:54:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.019 08:54:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.019 ************************************ 00:09:18.019 END TEST nvmf_nmic 00:09:18.019 ************************************ 00:09:18.019 08:54:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:18.019 08:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:18.019 08:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.019 08:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:18.019 ************************************ 00:09:18.019 START TEST nvmf_fio_target 00:09:18.019 ************************************ 00:09:18.019 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:18.019 * Looking for test storage... 00:09:18.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.019 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.019 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.019 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:18.020 08:54:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.020 --rc genhtml_branch_coverage=1 00:09:18.020 --rc genhtml_function_coverage=1 00:09:18.020 --rc genhtml_legend=1 00:09:18.020 --rc geninfo_all_blocks=1 00:09:18.020 --rc geninfo_unexecuted_blocks=1 00:09:18.020 00:09:18.020 ' 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.020 --rc genhtml_branch_coverage=1 00:09:18.020 --rc genhtml_function_coverage=1 00:09:18.020 --rc genhtml_legend=1 00:09:18.020 --rc geninfo_all_blocks=1 00:09:18.020 --rc geninfo_unexecuted_blocks=1 00:09:18.020 00:09:18.020 ' 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.020 --rc genhtml_branch_coverage=1 00:09:18.020 --rc genhtml_function_coverage=1 00:09:18.020 --rc genhtml_legend=1 00:09:18.020 --rc geninfo_all_blocks=1 00:09:18.020 --rc geninfo_unexecuted_blocks=1 00:09:18.020 00:09:18.020 ' 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.020 --rc genhtml_branch_coverage=1 00:09:18.020 --rc genhtml_function_coverage=1 00:09:18.020 --rc genhtml_legend=1 00:09:18.020 --rc geninfo_all_blocks=1 00:09:18.020 --rc geninfo_unexecuted_blocks=1 00:09:18.020 00:09:18.020 ' 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:18.020 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:18.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:18.021 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.164 08:54:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:26.164 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:26.164 08:54:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:26.164 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:26.164 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:26.164 Found net devices under 0000:4b:00.1: cvl_0_1 
00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.164 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:26.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:26.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:09:26.165 00:09:26.165 --- 10.0.0.2 ping statistics --- 00:09:26.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.165 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:09:26.165 00:09:26.165 --- 10.0.0.1 ping statistics --- 00:09:26.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.165 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=550369 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 550369 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 550369 ']' 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.165 08:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.165 [2024-11-20 08:54:50.833364] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:09:26.165 [2024-11-20 08:54:50.833430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.165 [2024-11-20 08:54:50.932262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.165 [2024-11-20 08:54:50.985647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.165 [2024-11-20 08:54:50.985696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.165 [2024-11-20 08:54:50.985704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.165 [2024-11-20 08:54:50.985712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.165 [2024-11-20 08:54:50.985718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:26.165 [2024-11-20 08:54:50.987751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.165 [2024-11-20 08:54:50.987912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.165 [2024-11-20 08:54:50.988079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.165 [2024-11-20 08:54:50.988079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.165 08:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.165 08:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:26.165 08:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.165 08:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.165 08:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.426 08:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.426 08:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:26.426 [2024-11-20 08:54:51.868737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.426 08:54:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.686 08:54:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:26.686 08:54:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.948 08:54:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:26.948 08:54:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.208 08:54:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:27.208 08:54:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.469 08:54:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:27.469 08:54:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:27.469 08:54:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.730 08:54:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:27.730 08:54:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.991 08:54:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:27.991 08:54:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:28.251 08:54:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:28.251 08:54:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:28.512 08:54:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:28.512 08:54:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:28.512 08:54:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.773 08:54:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:28.773 08:54:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:29.034 08:54:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.034 [2024-11-20 08:54:54.509880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.034 08:54:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:29.295 08:54:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:29.557 08:54:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:30.941 08:54:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:30.941 08:54:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:30.941 08:54:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.941 08:54:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:30.941 08:54:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:30.941 08:54:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:33.485 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:33.485 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:33.485 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.485 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:33.485 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.485 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:33.485 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:33.485 [global] 00:09:33.485 thread=1 00:09:33.485 invalidate=1 00:09:33.485 rw=write 00:09:33.485 time_based=1 00:09:33.485 runtime=1 00:09:33.485 ioengine=libaio 00:09:33.485 direct=1 00:09:33.485 bs=4096 00:09:33.485 iodepth=1 00:09:33.485 norandommap=0 00:09:33.485 numjobs=1 00:09:33.485 00:09:33.485 
verify_dump=1 00:09:33.485 verify_backlog=512 00:09:33.485 verify_state_save=0 00:09:33.485 do_verify=1 00:09:33.485 verify=crc32c-intel 00:09:33.485 [job0] 00:09:33.485 filename=/dev/nvme0n1 00:09:33.485 [job1] 00:09:33.485 filename=/dev/nvme0n2 00:09:33.485 [job2] 00:09:33.485 filename=/dev/nvme0n3 00:09:33.485 [job3] 00:09:33.485 filename=/dev/nvme0n4 00:09:33.485 Could not set queue depth (nvme0n1) 00:09:33.485 Could not set queue depth (nvme0n2) 00:09:33.485 Could not set queue depth (nvme0n3) 00:09:33.485 Could not set queue depth (nvme0n4) 00:09:33.485 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.485 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.485 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.485 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.485 fio-3.35 00:09:33.485 Starting 4 threads 00:09:34.867 00:09:34.867 job0: (groupid=0, jobs=1): err= 0: pid=552284: Wed Nov 20 08:55:00 2024 00:09:34.867 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:34.867 slat (nsec): min=6738, max=63365, avg=27892.60, stdev=3634.51 00:09:34.867 clat (usec): min=655, max=1225, avg=950.69, stdev=71.42 00:09:34.867 lat (usec): min=674, max=1241, avg=978.58, stdev=71.46 00:09:34.867 clat percentiles (usec): 00:09:34.867 | 1.00th=[ 725], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 906], 00:09:34.867 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:09:34.867 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1037], 00:09:34.867 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:34.867 | 99.99th=[ 1221] 00:09:34.867 write: IOPS=826, BW=3305KiB/s (3384kB/s)(3308KiB/1001msec); 0 zone resets 00:09:34.867 slat (nsec): min=9376, max=72111, avg=31582.07, 
stdev=10562.08 00:09:34.867 clat (usec): min=127, max=4096, avg=559.43, stdev=183.71 00:09:34.867 lat (usec): min=138, max=4110, avg=591.01, stdev=185.92 00:09:34.867 clat percentiles (usec): 00:09:34.867 | 1.00th=[ 227], 5.00th=[ 347], 10.00th=[ 392], 20.00th=[ 453], 00:09:34.867 | 30.00th=[ 486], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 594], 00:09:34.867 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 725], 00:09:34.867 | 99.00th=[ 791], 99.50th=[ 807], 99.90th=[ 4113], 99.95th=[ 4113], 00:09:34.867 | 99.99th=[ 4113] 00:09:34.867 bw ( KiB/s): min= 4096, max= 4096, per=43.91%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.867 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.867 lat (usec) : 250=0.82%, 500=19.94%, 750=39.96%, 1000=30.47% 00:09:34.867 lat (msec) : 2=8.66%, 4=0.07%, 10=0.07% 00:09:34.867 cpu : usr=2.90%, sys=5.20%, ctx=1342, majf=0, minf=1 00:09:34.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.867 issued rwts: total=512,827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.867 job1: (groupid=0, jobs=1): err= 0: pid=552285: Wed Nov 20 08:55:00 2024 00:09:34.867 read: IOPS=17, BW=71.5KiB/s (73.2kB/s)(72.0KiB/1007msec) 00:09:34.867 slat (nsec): min=26001, max=29320, avg=26446.28, stdev=733.42 00:09:34.867 clat (usec): min=40863, max=41522, avg=40992.42, stdev=142.61 00:09:34.867 lat (usec): min=40889, max=41548, avg=41018.87, stdev=142.69 00:09:34.867 clat percentiles (usec): 00:09:34.867 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:34.867 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:34.867 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:34.867 | 
99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:34.867 | 99.99th=[41681] 00:09:34.867 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:09:34.867 slat (nsec): min=10140, max=58425, avg=32123.64, stdev=10197.12 00:09:34.867 clat (usec): min=121, max=946, avg=484.12, stdev=141.55 00:09:34.867 lat (usec): min=132, max=980, avg=516.24, stdev=143.07 00:09:34.867 clat percentiles (usec): 00:09:34.867 | 1.00th=[ 163], 5.00th=[ 262], 10.00th=[ 306], 20.00th=[ 351], 00:09:34.867 | 30.00th=[ 420], 40.00th=[ 453], 50.00th=[ 482], 60.00th=[ 519], 00:09:34.867 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 660], 95.00th=[ 717], 00:09:34.867 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[ 947], 99.95th=[ 947], 00:09:34.867 | 99.99th=[ 947] 00:09:34.867 bw ( KiB/s): min= 4096, max= 4096, per=43.91%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.867 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.867 lat (usec) : 250=3.96%, 500=47.92%, 750=42.08%, 1000=2.64% 00:09:34.867 lat (msec) : 50=3.40% 00:09:34.868 cpu : usr=0.40%, sys=1.89%, ctx=531, majf=0, minf=1 00:09:34.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.868 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.868 job2: (groupid=0, jobs=1): err= 0: pid=552286: Wed Nov 20 08:55:00 2024 00:09:34.868 read: IOPS=50, BW=201KiB/s (206kB/s)(208KiB/1036msec) 00:09:34.868 slat (nsec): min=6956, max=61424, avg=18426.38, stdev=10919.08 00:09:34.868 clat (usec): min=203, max=42187, avg=14475.41, stdev=19467.90 00:09:34.868 lat (usec): min=212, max=42202, avg=14493.84, stdev=19473.42 00:09:34.868 clat percentiles (usec): 00:09:34.868 | 1.00th=[ 204], 5.00th=[ 375], 
10.00th=[ 482], 20.00th=[ 545], 00:09:34.868 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 881], 00:09:34.868 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:34.868 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:34.868 | 99.99th=[42206] 00:09:34.868 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:09:34.868 slat (nsec): min=10294, max=53601, avg=32487.67, stdev=9251.61 00:09:34.868 clat (usec): min=122, max=840, avg=510.38, stdev=129.75 00:09:34.868 lat (usec): min=133, max=893, avg=542.87, stdev=132.67 00:09:34.868 clat percentiles (usec): 00:09:34.868 | 1.00th=[ 145], 5.00th=[ 281], 10.00th=[ 334], 20.00th=[ 400], 00:09:34.868 | 30.00th=[ 449], 40.00th=[ 494], 50.00th=[ 529], 60.00th=[ 553], 00:09:34.868 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 709], 00:09:34.868 | 99.00th=[ 750], 99.50th=[ 807], 99.90th=[ 840], 99.95th=[ 840], 00:09:34.868 | 99.99th=[ 840] 00:09:34.868 bw ( KiB/s): min= 4096, max= 4096, per=43.91%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.868 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.868 lat (usec) : 250=2.84%, 500=36.70%, 750=55.50%, 1000=1.42% 00:09:34.868 lat (msec) : 2=0.35%, 20=0.18%, 50=3.01% 00:09:34.868 cpu : usr=0.48%, sys=1.93%, ctx=566, majf=0, minf=1 00:09:34.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.868 issued rwts: total=52,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.868 job3: (groupid=0, jobs=1): err= 0: pid=552287: Wed Nov 20 08:55:00 2024 00:09:34.868 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:34.868 slat (nsec): min=6256, max=61900, avg=27346.40, stdev=4059.70 
00:09:34.868 clat (usec): min=354, max=41945, avg=1232.26, stdev=3110.64 00:09:34.868 lat (usec): min=361, max=41972, avg=1259.61, stdev=3110.63 00:09:34.868 clat percentiles (usec): 00:09:34.868 | 1.00th=[ 603], 5.00th=[ 824], 10.00th=[ 898], 20.00th=[ 947], 00:09:34.868 | 30.00th=[ 979], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:09:34.868 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:09:34.868 | 99.00th=[ 1221], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:34.868 | 99.99th=[42206] 00:09:34.868 write: IOPS=564, BW=2258KiB/s (2312kB/s)(2260KiB/1001msec); 0 zone resets 00:09:34.868 slat (nsec): min=10045, max=53913, avg=29245.50, stdev=10802.00 00:09:34.868 clat (usec): min=111, max=891, avg=584.32, stdev=144.26 00:09:34.868 lat (usec): min=122, max=927, avg=613.56, stdev=150.14 00:09:34.868 clat percentiles (usec): 00:09:34.868 | 1.00th=[ 147], 5.00th=[ 281], 10.00th=[ 375], 20.00th=[ 478], 00:09:34.868 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:09:34.868 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 766], 00:09:34.868 | 99.00th=[ 816], 99.50th=[ 873], 99.90th=[ 889], 99.95th=[ 889], 00:09:34.868 | 99.99th=[ 889] 00:09:34.868 bw ( KiB/s): min= 4096, max= 4096, per=43.91%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.868 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.868 lat (usec) : 250=1.49%, 500=10.96%, 750=36.86%, 1000=25.16% 00:09:34.868 lat (msec) : 2=25.26%, 50=0.28% 00:09:34.868 cpu : usr=0.90%, sys=3.80%, ctx=1078, majf=0, minf=1 00:09:34.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.868 issued rwts: total=512,565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.868 
00:09:34.868 Run status group 0 (all jobs): 00:09:34.868 READ: bw=4224KiB/s (4325kB/s), 71.5KiB/s-2046KiB/s (73.2kB/s-2095kB/s), io=4376KiB (4481kB), run=1001-1036msec 00:09:34.868 WRITE: bw=9328KiB/s (9552kB/s), 1977KiB/s-3305KiB/s (2024kB/s-3384kB/s), io=9664KiB (9896kB), run=1001-1036msec 00:09:34.868 00:09:34.868 Disk stats (read/write): 00:09:34.868 nvme0n1: ios=561/547, merge=0/0, ticks=1308/249, in_queue=1557, util=83.87% 00:09:34.868 nvme0n2: ios=59/512, merge=0/0, ticks=637/256, in_queue=893, util=90.81% 00:09:34.868 nvme0n3: ios=41/512, merge=0/0, ticks=1392/237, in_queue=1629, util=92.06% 00:09:34.868 nvme0n4: ios=424/512, merge=0/0, ticks=868/309, in_queue=1177, util=94.21% 00:09:34.868 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:34.868 [global] 00:09:34.868 thread=1 00:09:34.868 invalidate=1 00:09:34.868 rw=randwrite 00:09:34.868 time_based=1 00:09:34.868 runtime=1 00:09:34.868 ioengine=libaio 00:09:34.868 direct=1 00:09:34.868 bs=4096 00:09:34.868 iodepth=1 00:09:34.868 norandommap=0 00:09:34.868 numjobs=1 00:09:34.868 00:09:34.868 verify_dump=1 00:09:34.868 verify_backlog=512 00:09:34.868 verify_state_save=0 00:09:34.868 do_verify=1 00:09:34.868 verify=crc32c-intel 00:09:34.868 [job0] 00:09:34.868 filename=/dev/nvme0n1 00:09:34.868 [job1] 00:09:34.868 filename=/dev/nvme0n2 00:09:34.868 [job2] 00:09:34.868 filename=/dev/nvme0n3 00:09:34.868 [job3] 00:09:34.868 filename=/dev/nvme0n4 00:09:34.868 Could not set queue depth (nvme0n1) 00:09:34.868 Could not set queue depth (nvme0n2) 00:09:34.868 Could not set queue depth (nvme0n3) 00:09:34.868 Could not set queue depth (nvme0n4) 00:09:35.129 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.129 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:09:35.129 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.129 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.129 fio-3.35 00:09:35.129 Starting 4 threads 00:09:36.515 00:09:36.515 job0: (groupid=0, jobs=1): err= 0: pid=552776: Wed Nov 20 08:55:01 2024 00:09:36.515 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:36.515 slat (nsec): min=26250, max=46268, avg=27375.88, stdev=2378.34 00:09:36.515 clat (usec): min=654, max=1149, avg=937.74, stdev=62.79 00:09:36.515 lat (usec): min=682, max=1176, avg=965.12, stdev=62.57 00:09:36.515 clat percentiles (usec): 00:09:36.515 | 1.00th=[ 742], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 889], 00:09:36.515 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 955], 60.00th=[ 963], 00:09:36.515 | 70.00th=[ 971], 80.00th=[ 979], 90.00th=[ 996], 95.00th=[ 1020], 00:09:36.515 | 99.00th=[ 1074], 99.50th=[ 1139], 99.90th=[ 1156], 99.95th=[ 1156], 00:09:36.515 | 99.99th=[ 1156] 00:09:36.515 write: IOPS=826, BW=3305KiB/s (3384kB/s)(3308KiB/1001msec); 0 zone resets 00:09:36.515 slat (nsec): min=9025, max=64451, avg=30302.29, stdev=9519.60 00:09:36.515 clat (usec): min=184, max=939, avg=567.89, stdev=123.09 00:09:36.515 lat (usec): min=196, max=972, avg=598.19, stdev=126.96 00:09:36.515 clat percentiles (usec): 00:09:36.515 | 1.00th=[ 262], 5.00th=[ 351], 10.00th=[ 396], 20.00th=[ 465], 00:09:36.515 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:09:36.515 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 750], 00:09:36.515 | 99.00th=[ 824], 99.50th=[ 840], 99.90th=[ 938], 99.95th=[ 938], 00:09:36.515 | 99.99th=[ 938] 00:09:36.515 bw ( KiB/s): min= 4096, max= 4096, per=44.20%, avg=4096.00, stdev= 0.00, samples=1 00:09:36.515 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:36.515 lat (usec) : 250=0.52%, 500=16.06%, 750=42.49%, 
1000=37.49% 00:09:36.515 lat (msec) : 2=3.44% 00:09:36.515 cpu : usr=3.60%, sys=4.40%, ctx=1340, majf=0, minf=1 00:09:36.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.515 issued rwts: total=512,827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.515 job1: (groupid=0, jobs=1): err= 0: pid=552790: Wed Nov 20 08:55:01 2024 00:09:36.515 read: IOPS=18, BW=75.5KiB/s (77.4kB/s)(76.0KiB/1006msec) 00:09:36.515 slat (nsec): min=25775, max=26714, avg=26186.11, stdev=217.62 00:09:36.515 clat (usec): min=40826, max=41011, avg=40957.93, stdev=48.59 00:09:36.515 lat (usec): min=40852, max=41037, avg=40984.12, stdev=48.59 00:09:36.515 clat percentiles (usec): 00:09:36.515 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:36.515 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:36.515 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:36.515 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:36.515 | 99.99th=[41157] 00:09:36.515 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:09:36.515 slat (nsec): min=9221, max=53857, avg=23870.42, stdev=10951.21 00:09:36.515 clat (usec): min=112, max=591, avg=410.52, stdev=79.43 00:09:36.515 lat (usec): min=134, max=611, avg=434.39, stdev=86.28 00:09:36.515 clat percentiles (usec): 00:09:36.515 | 1.00th=[ 239], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 334], 00:09:36.515 | 30.00th=[ 351], 40.00th=[ 383], 50.00th=[ 433], 60.00th=[ 449], 00:09:36.515 | 70.00th=[ 461], 80.00th=[ 478], 90.00th=[ 502], 95.00th=[ 515], 00:09:36.515 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 594], 99.95th=[ 594], 00:09:36.515 | 99.99th=[ 594] 00:09:36.515 bw ( KiB/s): 
min= 4096, max= 4096, per=44.20%, avg=4096.00, stdev= 0.00, samples=1 00:09:36.515 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:36.515 lat (usec) : 250=1.88%, 500=83.24%, 750=11.30% 00:09:36.515 lat (msec) : 50=3.58% 00:09:36.515 cpu : usr=1.19%, sys=0.70%, ctx=532, majf=0, minf=1 00:09:36.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.515 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.515 job2: (groupid=0, jobs=1): err= 0: pid=552808: Wed Nov 20 08:55:01 2024 00:09:36.515 read: IOPS=16, BW=67.9KiB/s (69.6kB/s)(68.0KiB/1001msec) 00:09:36.515 slat (nsec): min=25855, max=26747, avg=26236.59, stdev=257.45 00:09:36.515 clat (usec): min=972, max=42039, avg=39478.37, stdev=9925.82 00:09:36.515 lat (usec): min=999, max=42065, avg=39504.60, stdev=9925.69 00:09:36.515 clat percentiles (usec): 00:09:36.515 | 1.00th=[ 971], 5.00th=[ 971], 10.00th=[41157], 20.00th=[41681], 00:09:36.515 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:36.515 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:36.515 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:36.515 | 99.99th=[42206] 00:09:36.515 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:36.515 slat (nsec): min=9474, max=67372, avg=28992.59, stdev=8803.48 00:09:36.515 clat (usec): min=225, max=926, avg=605.87, stdev=118.84 00:09:36.515 lat (usec): min=236, max=958, avg=634.86, stdev=122.65 00:09:36.515 clat percentiles (usec): 00:09:36.515 | 1.00th=[ 285], 5.00th=[ 379], 10.00th=[ 457], 20.00th=[ 506], 00:09:36.515 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:09:36.515 | 
70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 775], 00:09:36.515 | 99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 930], 99.95th=[ 930], 00:09:36.515 | 99.99th=[ 930] 00:09:36.515 bw ( KiB/s): min= 4096, max= 4096, per=44.20%, avg=4096.00, stdev= 0.00, samples=1 00:09:36.515 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:36.515 lat (usec) : 250=0.19%, 500=17.96%, 750=70.51%, 1000=8.32% 00:09:36.515 lat (msec) : 50=3.02% 00:09:36.515 cpu : usr=0.90%, sys=1.30%, ctx=529, majf=0, minf=2 00:09:36.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.515 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.515 job3: (groupid=0, jobs=1): err= 0: pid=552813: Wed Nov 20 08:55:01 2024 00:09:36.515 read: IOPS=16, BW=66.7KiB/s (68.3kB/s)(68.0KiB/1020msec) 00:09:36.515 slat (nsec): min=10725, max=27289, avg=26045.06, stdev=3951.20 00:09:36.516 clat (usec): min=40994, max=41994, avg=41857.41, stdev=281.43 00:09:36.516 lat (usec): min=41021, max=42021, avg=41883.46, stdev=283.69 00:09:36.516 clat percentiles (usec): 00:09:36.516 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:36.516 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:36.516 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:36.516 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:36.516 | 99.99th=[42206] 00:09:36.516 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:09:36.516 slat (nsec): min=9613, max=52673, avg=26591.99, stdev=11181.38 00:09:36.516 clat (usec): min=171, max=982, avg=566.25, stdev=134.84 00:09:36.516 lat (usec): min=181, max=1018, avg=592.85, 
stdev=139.91 00:09:36.516 clat percentiles (usec): 00:09:36.516 | 1.00th=[ 227], 5.00th=[ 322], 10.00th=[ 396], 20.00th=[ 461], 00:09:36.516 | 30.00th=[ 498], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 603], 00:09:36.516 | 70.00th=[ 635], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 766], 00:09:36.516 | 99.00th=[ 865], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 979], 00:09:36.516 | 99.99th=[ 979] 00:09:36.516 bw ( KiB/s): min= 4096, max= 4096, per=44.20%, avg=4096.00, stdev= 0.00, samples=1 00:09:36.516 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:36.516 lat (usec) : 250=1.70%, 500=27.60%, 750=61.25%, 1000=6.24% 00:09:36.516 lat (msec) : 50=3.21% 00:09:36.516 cpu : usr=0.98%, sys=0.98%, ctx=532, majf=0, minf=1 00:09:36.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.516 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.516 00:09:36.516 Run status group 0 (all jobs): 00:09:36.516 READ: bw=2216KiB/s (2269kB/s), 66.7KiB/s-2046KiB/s (68.3kB/s-2095kB/s), io=2260KiB (2314kB), run=1001-1020msec 00:09:36.516 WRITE: bw=9267KiB/s (9489kB/s), 2008KiB/s-3305KiB/s (2056kB/s-3384kB/s), io=9452KiB (9679kB), run=1001-1020msec 00:09:36.516 00:09:36.516 Disk stats (read/write): 00:09:36.516 nvme0n1: ios=540/561, merge=0/0, ticks=1116/239, in_queue=1355, util=99.50% 00:09:36.516 nvme0n2: ios=49/512, merge=0/0, ticks=746/204, in_queue=950, util=100.00% 00:09:36.516 nvme0n3: ios=40/512, merge=0/0, ticks=1275/305, in_queue=1580, util=99.16% 00:09:36.516 nvme0n4: ios=36/512, merge=0/0, ticks=754/286, in_queue=1040, util=97.12% 00:09:36.516 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:36.516 [global] 00:09:36.516 thread=1 00:09:36.516 invalidate=1 00:09:36.516 rw=write 00:09:36.516 time_based=1 00:09:36.516 runtime=1 00:09:36.516 ioengine=libaio 00:09:36.516 direct=1 00:09:36.516 bs=4096 00:09:36.516 iodepth=128 00:09:36.516 norandommap=0 00:09:36.516 numjobs=1 00:09:36.516 00:09:36.516 verify_dump=1 00:09:36.516 verify_backlog=512 00:09:36.516 verify_state_save=0 00:09:36.516 do_verify=1 00:09:36.516 verify=crc32c-intel 00:09:36.516 [job0] 00:09:36.516 filename=/dev/nvme0n1 00:09:36.516 [job1] 00:09:36.516 filename=/dev/nvme0n2 00:09:36.516 [job2] 00:09:36.516 filename=/dev/nvme0n3 00:09:36.516 [job3] 00:09:36.516 filename=/dev/nvme0n4 00:09:36.516 Could not set queue depth (nvme0n1) 00:09:36.516 Could not set queue depth (nvme0n2) 00:09:36.516 Could not set queue depth (nvme0n3) 00:09:36.516 Could not set queue depth (nvme0n4) 00:09:36.776 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.776 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.776 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.776 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.776 fio-3.35 00:09:36.776 Starting 4 threads 00:09:38.160 00:09:38.160 job0: (groupid=0, jobs=1): err= 0: pid=553267: Wed Nov 20 08:55:03 2024 00:09:38.160 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:09:38.160 slat (nsec): min=954, max=11890k, avg=73088.32, stdev=551637.33 00:09:38.160 clat (usec): min=3783, max=42275, avg=9688.10, stdev=3925.42 00:09:38.160 lat (usec): min=3791, max=45096, avg=9761.18, stdev=3964.49 00:09:38.160 clat percentiles (usec): 00:09:38.160 | 1.00th=[ 3851], 5.00th=[ 5276], 10.00th=[ 
6259], 20.00th=[ 6652], 00:09:38.160 | 30.00th=[ 6980], 40.00th=[ 7898], 50.00th=[ 8356], 60.00th=[10028], 00:09:38.160 | 70.00th=[11076], 80.00th=[12125], 90.00th=[15139], 95.00th=[16909], 00:09:38.160 | 99.00th=[21627], 99.50th=[24773], 99.90th=[42206], 99.95th=[42206], 00:09:38.160 | 99.99th=[42206] 00:09:38.160 write: IOPS=6432, BW=25.1MiB/s (26.3MB/s)(25.3MiB/1007msec); 0 zone resets 00:09:38.160 slat (nsec): min=1724, max=10041k, avg=69122.11, stdev=394984.42 00:09:38.160 clat (usec): min=389, max=26286, avg=10511.52, stdev=5375.23 00:09:38.160 lat (usec): min=406, max=26292, avg=10580.64, stdev=5413.24 00:09:38.160 clat percentiles (usec): 00:09:38.160 | 1.00th=[ 1221], 5.00th=[ 3589], 10.00th=[ 4883], 20.00th=[ 5538], 00:09:38.160 | 30.00th=[ 6259], 40.00th=[ 7832], 50.00th=[10552], 60.00th=[11469], 00:09:38.160 | 70.00th=[12649], 80.00th=[14091], 90.00th=[18744], 95.00th=[21103], 00:09:38.160 | 99.00th=[23987], 99.50th=[25560], 99.90th=[26346], 99.95th=[26346], 00:09:38.160 | 99.99th=[26346] 00:09:38.160 bw ( KiB/s): min=24526, max=26232, per=27.70%, avg=25379.00, stdev=1206.32, samples=2 00:09:38.160 iops : min= 6131, max= 6558, avg=6344.50, stdev=301.93, samples=2 00:09:38.160 lat (usec) : 500=0.06%, 750=0.13%, 1000=0.29% 00:09:38.160 lat (msec) : 2=0.30%, 4=2.96%, 10=49.68%, 20=41.37%, 50=5.21% 00:09:38.160 cpu : usr=5.07%, sys=6.56%, ctx=545, majf=0, minf=1 00:09:38.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:38.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.160 issued rwts: total=6144,6478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.160 job1: (groupid=0, jobs=1): err= 0: pid=553286: Wed Nov 20 08:55:03 2024 00:09:38.160 read: IOPS=4961, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1005msec) 00:09:38.160 slat (nsec): min=936, 
max=11232k, avg=91905.94, stdev=631554.23 00:09:38.160 clat (usec): min=2331, max=30420, avg=11498.80, stdev=3939.58 00:09:38.160 lat (usec): min=3034, max=30422, avg=11590.70, stdev=3981.35 00:09:38.160 clat percentiles (usec): 00:09:38.160 | 1.00th=[ 4686], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 8455], 00:09:38.160 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10552], 60.00th=[11207], 00:09:38.160 | 70.00th=[12518], 80.00th=[14353], 90.00th=[17695], 95.00th=[20055], 00:09:38.160 | 99.00th=[23462], 99.50th=[24773], 99.90th=[25297], 99.95th=[25560], 00:09:38.160 | 99.99th=[30540] 00:09:38.160 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:09:38.160 slat (nsec): min=1636, max=14717k, avg=93791.54, stdev=611281.16 00:09:38.160 clat (usec): min=1245, max=56655, avg=13651.75, stdev=9295.54 00:09:38.160 lat (usec): min=1255, max=56661, avg=13745.54, stdev=9362.09 00:09:38.160 clat percentiles (usec): 00:09:38.160 | 1.00th=[ 2343], 5.00th=[ 4015], 10.00th=[ 5080], 20.00th=[ 7308], 00:09:38.160 | 30.00th=[ 8455], 40.00th=[10683], 50.00th=[11338], 60.00th=[12387], 00:09:38.160 | 70.00th=[13435], 80.00th=[17695], 90.00th=[25560], 95.00th=[34341], 00:09:38.160 | 99.00th=[51643], 99.50th=[52691], 99.90th=[56886], 99.95th=[56886], 00:09:38.160 | 99.99th=[56886] 00:09:38.160 bw ( KiB/s): min=16384, max=24576, per=22.35%, avg=20480.00, stdev=5792.62, samples=2 00:09:38.160 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:09:38.160 lat (msec) : 2=0.29%, 4=2.40%, 10=33.79%, 20=52.86%, 50=10.04% 00:09:38.160 lat (msec) : 100=0.61% 00:09:38.160 cpu : usr=3.19%, sys=5.78%, ctx=510, majf=0, minf=2 00:09:38.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:38.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.160 issued rwts: total=4986,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:09:38.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.160 job2: (groupid=0, jobs=1): err= 0: pid=553307: Wed Nov 20 08:55:03 2024 00:09:38.160 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:09:38.160 slat (nsec): min=928, max=11776k, avg=93365.56, stdev=622295.64 00:09:38.160 clat (usec): min=2024, max=25906, avg=11786.85, stdev=3555.32 00:09:38.160 lat (usec): min=2050, max=25913, avg=11880.21, stdev=3599.19 00:09:38.160 clat percentiles (usec): 00:09:38.160 | 1.00th=[ 2704], 5.00th=[ 7177], 10.00th=[ 8291], 20.00th=[ 9110], 00:09:38.160 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[11076], 60.00th=[12125], 00:09:38.160 | 70.00th=[13304], 80.00th=[14091], 90.00th=[16319], 95.00th=[18482], 00:09:38.160 | 99.00th=[24249], 99.50th=[24511], 99.90th=[24511], 99.95th=[25560], 00:09:38.160 | 99.99th=[25822] 00:09:38.160 write: IOPS=5300, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1005msec); 0 zone resets 00:09:38.160 slat (nsec): min=1572, max=10146k, avg=92735.84, stdev=527103.43 00:09:38.160 clat (usec): min=939, max=29822, avg=12612.59, stdev=5664.07 00:09:38.160 lat (usec): min=972, max=29824, avg=12705.32, stdev=5709.68 00:09:38.160 clat percentiles (usec): 00:09:38.160 | 1.00th=[ 4817], 5.00th=[ 6325], 10.00th=[ 7701], 20.00th=[ 8160], 00:09:38.160 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11600], 00:09:38.160 | 70.00th=[13960], 80.00th=[18744], 90.00th=[21627], 95.00th=[23725], 00:09:38.160 | 99.00th=[28181], 99.50th=[28967], 99.90th=[29754], 99.95th=[29754], 00:09:38.160 | 99.99th=[29754] 00:09:38.160 bw ( KiB/s): min=20384, max=21216, per=22.70%, avg=20800.00, stdev=588.31, samples=2 00:09:38.160 iops : min= 5096, max= 5304, avg=5200.00, stdev=147.08, samples=2 00:09:38.160 lat (usec) : 1000=0.01% 00:09:38.160 lat (msec) : 2=0.09%, 4=0.74%, 10=39.67%, 20=50.51%, 50=8.99% 00:09:38.160 cpu : usr=3.69%, sys=4.68%, ctx=500, majf=0, minf=2 00:09:38.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, 
>=64=99.4% 00:09:38.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.160 issued rwts: total=5120,5327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.160 job3: (groupid=0, jobs=1): err= 0: pid=553314: Wed Nov 20 08:55:03 2024 00:09:38.160 read: IOPS=5635, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1004msec) 00:09:38.160 slat (nsec): min=972, max=11046k, avg=82091.10, stdev=555546.40 00:09:38.160 clat (usec): min=2468, max=25245, avg=11041.96, stdev=3358.46 00:09:38.160 lat (usec): min=3356, max=25270, avg=11124.05, stdev=3393.19 00:09:38.160 clat percentiles (usec): 00:09:38.160 | 1.00th=[ 4490], 5.00th=[ 6456], 10.00th=[ 8094], 20.00th=[ 8586], 00:09:38.160 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[11076], 00:09:38.160 | 70.00th=[12387], 80.00th=[13566], 90.00th=[16057], 95.00th=[17433], 00:09:38.160 | 99.00th=[20579], 99.50th=[20841], 99.90th=[22676], 99.95th=[24249], 00:09:38.160 | 99.99th=[25297] 00:09:38.160 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:09:38.160 slat (nsec): min=1646, max=16204k, avg=80119.90, stdev=546140.05 00:09:38.160 clat (usec): min=1359, max=31644, avg=10524.00, stdev=4542.95 00:09:38.160 lat (usec): min=1386, max=31646, avg=10604.12, stdev=4585.66 00:09:38.160 clat percentiles (usec): 00:09:38.160 | 1.00th=[ 3589], 5.00th=[ 4883], 10.00th=[ 5604], 20.00th=[ 8029], 00:09:38.160 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9634], 00:09:38.160 | 70.00th=[11207], 80.00th=[13566], 90.00th=[17433], 95.00th=[20579], 00:09:38.160 | 99.00th=[25035], 99.50th=[25822], 99.90th=[30016], 99.95th=[31589], 00:09:38.160 | 99.99th=[31589] 00:09:38.160 bw ( KiB/s): min=23504, max=24840, per=26.38%, avg=24172.00, stdev=944.69, samples=2 00:09:38.160 iops : min= 5876, max= 6210, avg=6043.00, stdev=236.17, 
samples=2 00:09:38.160 lat (msec) : 2=0.03%, 4=0.81%, 10=54.68%, 20=40.98%, 50=3.50% 00:09:38.160 cpu : usr=4.99%, sys=5.78%, ctx=396, majf=0, minf=1 00:09:38.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:38.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.160 issued rwts: total=5658,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.160 00:09:38.160 Run status group 0 (all jobs): 00:09:38.160 READ: bw=85.0MiB/s (89.1MB/s), 19.4MiB/s-23.8MiB/s (20.3MB/s-25.0MB/s), io=85.6MiB (89.7MB), run=1004-1007msec 00:09:38.160 WRITE: bw=89.5MiB/s (93.8MB/s), 19.9MiB/s-25.1MiB/s (20.9MB/s-26.3MB/s), io=90.1MiB (94.5MB), run=1004-1007msec 00:09:38.160 00:09:38.160 Disk stats (read/write): 00:09:38.160 nvme0n1: ios=5144/5158, merge=0/0, ticks=50267/49668, in_queue=99935, util=95.39% 00:09:38.161 nvme0n2: ios=4182/4608, merge=0/0, ticks=42213/44052, in_queue=86265, util=98.47% 00:09:38.161 nvme0n3: ios=4452/4608, merge=0/0, ticks=28568/28275, in_queue=56843, util=88.38% 00:09:38.161 nvme0n4: ios=4628/4908, merge=0/0, ticks=30148/34382, in_queue=64530, util=97.97% 00:09:38.161 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:38.161 [global] 00:09:38.161 thread=1 00:09:38.161 invalidate=1 00:09:38.161 rw=randwrite 00:09:38.161 time_based=1 00:09:38.161 runtime=1 00:09:38.161 ioengine=libaio 00:09:38.161 direct=1 00:09:38.161 bs=4096 00:09:38.161 iodepth=128 00:09:38.161 norandommap=0 00:09:38.161 numjobs=1 00:09:38.161 00:09:38.161 verify_dump=1 00:09:38.161 verify_backlog=512 00:09:38.161 verify_state_save=0 00:09:38.161 do_verify=1 00:09:38.161 verify=crc32c-intel 00:09:38.161 [job0] 00:09:38.161 
filename=/dev/nvme0n1 00:09:38.161 [job1] 00:09:38.161 filename=/dev/nvme0n2 00:09:38.161 [job2] 00:09:38.161 filename=/dev/nvme0n3 00:09:38.161 [job3] 00:09:38.161 filename=/dev/nvme0n4 00:09:38.161 Could not set queue depth (nvme0n1) 00:09:38.161 Could not set queue depth (nvme0n2) 00:09:38.161 Could not set queue depth (nvme0n3) 00:09:38.161 Could not set queue depth (nvme0n4) 00:09:38.423 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:38.423 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:38.423 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:38.423 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:38.423 fio-3.35 00:09:38.423 Starting 4 threads 00:09:39.808 00:09:39.809 job0: (groupid=0, jobs=1): err= 0: pid=553788: Wed Nov 20 08:55:05 2024 00:09:39.809 read: IOPS=6491, BW=25.4MiB/s (26.6MB/s)(25.4MiB/1002msec) 00:09:39.809 slat (nsec): min=897, max=8034.9k, avg=80051.59, stdev=446114.08 00:09:39.809 clat (usec): min=1232, max=26081, avg=10332.33, stdev=3761.95 00:09:39.809 lat (usec): min=1781, max=26090, avg=10412.38, stdev=3800.78 00:09:39.809 clat percentiles (usec): 00:09:39.809 | 1.00th=[ 4490], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 7570], 00:09:39.809 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 9896], 00:09:39.809 | 70.00th=[12387], 80.00th=[13435], 90.00th=[15008], 95.00th=[17433], 00:09:39.809 | 99.00th=[22414], 99.50th=[22414], 99.90th=[25822], 99.95th=[26084], 00:09:39.809 | 99.99th=[26084] 00:09:39.809 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:09:39.809 slat (nsec): min=1499, max=8922.6k, avg=66995.96, stdev=396204.33 00:09:39.809 clat (usec): min=1089, max=31355, avg=8999.97, stdev=3279.82 00:09:39.809 lat (usec): 
min=1099, max=31357, avg=9066.97, stdev=3299.38 00:09:39.809 clat percentiles (usec): 00:09:39.809 | 1.00th=[ 4228], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6652], 00:09:39.809 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8356], 60.00th=[ 9372], 00:09:39.809 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11863], 95.00th=[14091], 00:09:39.809 | 99.00th=[25035], 99.50th=[26084], 99.90th=[31327], 99.95th=[31327], 00:09:39.809 | 99.99th=[31327] 00:09:39.809 bw ( KiB/s): min=25724, max=25724, per=27.19%, avg=25724.00, stdev= 0.00, samples=1 00:09:39.809 iops : min= 6431, max= 6431, avg=6431.00, stdev= 0.00, samples=1 00:09:39.809 lat (msec) : 2=0.22%, 4=0.24%, 10=65.34%, 20=31.66%, 50=2.54% 00:09:39.809 cpu : usr=3.90%, sys=6.79%, ctx=639, majf=0, minf=1 00:09:39.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:39.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.809 issued rwts: total=6504,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.809 job1: (groupid=0, jobs=1): err= 0: pid=553797: Wed Nov 20 08:55:05 2024 00:09:39.809 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:09:39.809 slat (nsec): min=884, max=26248k, avg=94420.03, stdev=774355.93 00:09:39.809 clat (usec): min=2923, max=67725, avg=12042.97, stdev=10492.60 00:09:39.809 lat (usec): min=2930, max=67750, avg=12137.39, stdev=10585.84 00:09:39.809 clat percentiles (usec): 00:09:39.809 | 1.00th=[ 4293], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 6915], 00:09:39.809 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7832], 00:09:39.809 | 70.00th=[ 9372], 80.00th=[13042], 90.00th=[29754], 95.00th=[35914], 00:09:39.809 | 99.00th=[55837], 99.50th=[55837], 99.90th=[56886], 99.95th=[61080], 00:09:39.809 | 99.99th=[67634] 00:09:39.809 write: IOPS=6408, BW=25.0MiB/s 
(26.2MB/s)(25.1MiB/1002msec); 0 zone resets 00:09:39.809 slat (nsec): min=1566, max=10050k, avg=58615.88, stdev=403866.87 00:09:39.809 clat (usec): min=675, max=56685, avg=8274.55, stdev=4819.06 00:09:39.809 lat (usec): min=692, max=56693, avg=8333.16, stdev=4833.30 00:09:39.809 clat percentiles (usec): 00:09:39.809 | 1.00th=[ 3720], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 6521], 00:09:39.809 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7701], 00:09:39.809 | 70.00th=[ 8029], 80.00th=[ 8356], 90.00th=[11207], 95.00th=[13960], 00:09:39.809 | 99.00th=[37487], 99.50th=[44827], 99.90th=[48497], 99.95th=[48497], 00:09:39.809 | 99.99th=[56886] 00:09:39.809 bw ( KiB/s): min=25876, max=25876, per=27.35%, avg=25876.00, stdev= 0.00, samples=1 00:09:39.809 iops : min= 6469, max= 6469, avg=6469.00, stdev= 0.00, samples=1 00:09:39.809 lat (usec) : 750=0.04% 00:09:39.809 lat (msec) : 2=0.10%, 4=1.58%, 10=80.09%, 20=9.50%, 50=7.67% 00:09:39.809 lat (msec) : 100=1.02% 00:09:39.809 cpu : usr=4.60%, sys=6.99%, ctx=382, majf=0, minf=2 00:09:39.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:39.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.809 issued rwts: total=6144,6421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.809 job2: (groupid=0, jobs=1): err= 0: pid=553815: Wed Nov 20 08:55:05 2024 00:09:39.809 read: IOPS=4288, BW=16.8MiB/s (17.6MB/s)(17.5MiB/1045msec) 00:09:39.809 slat (nsec): min=957, max=12531k, avg=114115.60, stdev=660495.29 00:09:39.809 clat (usec): min=5519, max=57568, avg=15490.32, stdev=8877.31 00:09:39.809 lat (usec): min=5527, max=63051, avg=15604.43, stdev=8915.54 00:09:39.809 clat percentiles (usec): 00:09:39.809 | 1.00th=[ 5997], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9896], 00:09:39.809 | 30.00th=[10814], 
40.00th=[12387], 50.00th=[13829], 60.00th=[15008], 00:09:39.809 | 70.00th=[16188], 80.00th=[18220], 90.00th=[20841], 95.00th=[33817], 00:09:39.809 | 99.00th=[52691], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:09:39.809 | 99.99th=[57410] 00:09:39.809 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:09:39.809 slat (nsec): min=1582, max=16390k, avg=100842.73, stdev=638606.16 00:09:39.809 clat (usec): min=4981, max=35540, avg=13449.00, stdev=5441.61 00:09:39.809 lat (usec): min=4991, max=35550, avg=13549.85, stdev=5477.62 00:09:39.809 clat percentiles (usec): 00:09:39.809 | 1.00th=[ 6063], 5.00th=[ 7504], 10.00th=[ 8356], 20.00th=[ 9765], 00:09:39.809 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11731], 60.00th=[12518], 00:09:39.809 | 70.00th=[14091], 80.00th=[17171], 90.00th=[22676], 95.00th=[24773], 00:09:39.809 | 99.00th=[31589], 99.50th=[32113], 99.90th=[34341], 99.95th=[34341], 00:09:39.809 | 99.99th=[35390] 00:09:39.809 bw ( KiB/s): min=16384, max=20480, per=19.48%, avg=18432.00, stdev=2896.31, samples=2 00:09:39.809 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:39.809 lat (msec) : 10=23.25%, 20=62.83%, 50=12.77%, 100=1.14% 00:09:39.809 cpu : usr=3.16%, sys=4.12%, ctx=413, majf=0, minf=1 00:09:39.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:39.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.809 issued rwts: total=4481,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.809 job3: (groupid=0, jobs=1): err= 0: pid=553823: Wed Nov 20 08:55:05 2024 00:09:39.809 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:09:39.809 slat (nsec): min=967, max=9543.4k, avg=66054.62, stdev=490306.42 00:09:39.809 clat (usec): min=1160, max=21550, avg=9135.79, stdev=2386.43 
00:09:39.809 lat (usec): min=1188, max=22199, avg=9201.84, stdev=2417.14 00:09:39.809 clat percentiles (usec): 00:09:39.809 | 1.00th=[ 3064], 5.00th=[ 5735], 10.00th=[ 6980], 20.00th=[ 7570], 00:09:39.809 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:09:39.809 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11731], 95.00th=[13042], 00:09:39.809 | 99.00th=[16909], 99.50th=[18482], 99.90th=[21627], 99.95th=[21627], 00:09:39.809 | 99.99th=[21627] 00:09:39.809 write: IOPS=6981, BW=27.3MiB/s (28.6MB/s)(27.5MiB/1007msec); 0 zone resets 00:09:39.809 slat (nsec): min=1556, max=8203.1k, avg=65341.03, stdev=464090.91 00:09:39.809 clat (usec): min=1322, max=25676, avg=9533.51, stdev=4326.45 00:09:39.809 lat (usec): min=1361, max=25681, avg=9598.85, stdev=4358.73 00:09:39.809 clat percentiles (usec): 00:09:39.809 | 1.00th=[ 2573], 5.00th=[ 4293], 10.00th=[ 5276], 20.00th=[ 6325], 00:09:39.809 | 30.00th=[ 7570], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8979], 00:09:39.809 | 70.00th=[10028], 80.00th=[11731], 90.00th=[16057], 95.00th=[19268], 00:09:39.809 | 99.00th=[23200], 99.50th=[23725], 99.90th=[25035], 99.95th=[25560], 00:09:39.809 | 99.99th=[25560] 00:09:39.809 bw ( KiB/s): min=27392, max=27776, per=29.16%, avg=27584.00, stdev=271.53, samples=2 00:09:39.809 iops : min= 6848, max= 6944, avg=6896.00, stdev=67.88, samples=2 00:09:39.809 lat (msec) : 2=0.45%, 4=3.00%, 10=66.66%, 20=27.43%, 50=2.46% 00:09:39.809 cpu : usr=5.77%, sys=7.16%, ctx=473, majf=0, minf=1 00:09:39.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:39.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.810 issued rwts: total=6656,7030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.810 00:09:39.810 Run status group 0 (all jobs): 00:09:39.810 READ: bw=88.9MiB/s 
(93.2MB/s), 16.8MiB/s-25.8MiB/s (17.6MB/s-27.1MB/s), io=92.9MiB (97.4MB), run=1002-1045msec 00:09:39.810 WRITE: bw=92.4MiB/s (96.9MB/s), 17.2MiB/s-27.3MiB/s (18.1MB/s-28.6MB/s), io=96.5MiB (101MB), run=1002-1045msec 00:09:39.810 00:09:39.810 Disk stats (read/write): 00:09:39.810 nvme0n1: ios=5272/5632, merge=0/0, ticks=21731/18482, in_queue=40213, util=97.80% 00:09:39.810 nvme0n2: ios=5166/5314, merge=0/0, ticks=31212/22031, in_queue=53243, util=96.43% 00:09:39.810 nvme0n3: ios=3861/4096, merge=0/0, ticks=18313/17587, in_queue=35900, util=98.73% 00:09:39.810 nvme0n4: ios=5632/5807, merge=0/0, ticks=49210/51810, in_queue=101020, util=89.43% 00:09:39.810 08:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:39.810 08:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=553908 00:09:39.810 08:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:39.810 08:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:39.810 [global] 00:09:39.810 thread=1 00:09:39.810 invalidate=1 00:09:39.810 rw=read 00:09:39.810 time_based=1 00:09:39.810 runtime=10 00:09:39.810 ioengine=libaio 00:09:39.810 direct=1 00:09:39.810 bs=4096 00:09:39.810 iodepth=1 00:09:39.810 norandommap=1 00:09:39.810 numjobs=1 00:09:39.810 00:09:39.810 [job0] 00:09:39.810 filename=/dev/nvme0n1 00:09:39.810 [job1] 00:09:39.810 filename=/dev/nvme0n2 00:09:39.810 [job2] 00:09:39.810 filename=/dev/nvme0n3 00:09:39.810 [job3] 00:09:39.810 filename=/dev/nvme0n4 00:09:39.810 Could not set queue depth (nvme0n1) 00:09:39.810 Could not set queue depth (nvme0n2) 00:09:39.810 Could not set queue depth (nvme0n3) 00:09:39.810 Could not set queue depth (nvme0n4) 00:09:40.071 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.071 job1: (g=0): 
rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.071 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.071 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.071 fio-3.35 00:09:40.071 Starting 4 threads 00:09:42.652 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:42.912 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:09:42.912 fio: pid=554340, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:42.912 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:43.172 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1667072, buflen=4096 00:09:43.172 fio: pid=554333, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:43.172 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.172 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:43.172 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11673600, buflen=4096 00:09:43.172 fio: pid=554297, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:43.173 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.173 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:43.435 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.435 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:43.435 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3670016, buflen=4096 00:09:43.435 fio: pid=554312, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:43.435 00:09:43.435 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=554297: Wed Nov 20 08:55:08 2024 00:09:43.435 read: IOPS=960, BW=3840KiB/s (3932kB/s)(11.1MiB/2969msec) 00:09:43.435 slat (usec): min=7, max=21693, avg=39.16, stdev=483.47 00:09:43.435 clat (usec): min=531, max=41761, avg=988.35, stdev=1337.02 00:09:43.435 lat (usec): min=558, max=41788, avg=1027.52, stdev=1421.48 00:09:43.435 clat percentiles (usec): 00:09:43.435 | 1.00th=[ 717], 5.00th=[ 775], 10.00th=[ 832], 20.00th=[ 889], 00:09:43.435 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[ 955], 60.00th=[ 963], 00:09:43.435 | 70.00th=[ 979], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1057], 00:09:43.435 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[41157], 99.95th=[41157], 00:09:43.435 | 99.99th=[41681] 00:09:43.435 bw ( KiB/s): min= 2984, max= 4160, per=73.29%, avg=3878.40, stdev=504.46, samples=5 00:09:43.435 iops : min= 746, max= 1040, avg=969.60, stdev=126.11, samples=5 00:09:43.435 lat (usec) : 750=2.70%, 1000=77.31% 00:09:43.435 lat (msec) : 2=19.82%, 20=0.04%, 50=0.11% 00:09:43.435 cpu : usr=0.57%, sys=3.47%, ctx=2856, majf=0, minf=2 00:09:43.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:09:43.435 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.435 issued rwts: total=2851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.435 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=554312: Wed Nov 20 08:55:08 2024 00:09:43.435 read: IOPS=281, BW=1125KiB/s (1152kB/s)(3584KiB/3187msec) 00:09:43.435 slat (usec): min=7, max=15524, avg=60.47, stdev=643.80 00:09:43.435 clat (usec): min=589, max=42041, avg=3465.49, stdev=9737.49 00:09:43.435 lat (usec): min=675, max=56727, avg=3526.00, stdev=9870.67 00:09:43.435 clat percentiles (usec): 00:09:43.435 | 1.00th=[ 758], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 930], 00:09:43.435 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:09:43.435 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[41157], 00:09:43.435 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:43.435 | 99.99th=[42206] 00:09:43.435 bw ( KiB/s): min= 96, max= 3640, per=22.45%, avg=1188.33, stdev=1685.88, samples=6 00:09:43.435 iops : min= 24, max= 910, avg=297.00, stdev=421.53, samples=6 00:09:43.435 lat (usec) : 750=0.78%, 1000=52.06% 00:09:43.435 lat (msec) : 2=40.91%, 50=6.13% 00:09:43.435 cpu : usr=0.28%, sys=0.88%, ctx=904, majf=0, minf=2 00:09:43.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.435 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.435 issued rwts: total=897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.435 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=554333: Wed Nov 20 08:55:08 2024 00:09:43.435 read: IOPS=146, BW=585KiB/s 
(599kB/s)(1628KiB/2784msec) 00:09:43.435 slat (usec): min=3, max=14748, avg=96.75, stdev=1001.27 00:09:43.435 clat (usec): min=656, max=43417, avg=6683.47, stdev=13995.72 00:09:43.435 lat (usec): min=663, max=43426, avg=6780.40, stdev=14003.57 00:09:43.435 clat percentiles (usec): 00:09:43.435 | 1.00th=[ 824], 5.00th=[ 938], 10.00th=[ 996], 20.00th=[ 1045], 00:09:43.435 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1156], 00:09:43.435 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[41157], 95.00th=[42206], 00:09:43.435 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:09:43.435 | 99.99th=[43254] 00:09:43.435 bw ( KiB/s): min= 96, max= 1040, per=10.34%, avg=547.20, stdev=421.37, samples=5 00:09:43.435 iops : min= 24, max= 260, avg=136.80, stdev=105.34, samples=5 00:09:43.435 lat (usec) : 750=0.49%, 1000=10.29% 00:09:43.435 lat (msec) : 2=75.25%, 50=13.73% 00:09:43.435 cpu : usr=0.18%, sys=0.43%, ctx=412, majf=0, minf=2 00:09:43.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.435 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.435 issued rwts: total=408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.435 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=554340: Wed Nov 20 08:55:08 2024 00:09:43.435 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(252KiB/2608msec) 00:09:43.435 slat (nsec): min=8901, max=36902, avg=27374.08, stdev=2774.01 00:09:43.435 clat (usec): min=664, max=43040, avg=41021.44, stdev=5187.55 00:09:43.435 lat (usec): min=701, max=43071, avg=41048.81, stdev=5186.40 00:09:43.435 clat percentiles (usec): 00:09:43.435 | 1.00th=[ 668], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:43.435 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 
60.00th=[41681], 00:09:43.435 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:43.435 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:43.435 | 99.99th=[43254] 00:09:43.435 bw ( KiB/s): min= 88, max= 104, per=1.81%, avg=96.00, stdev= 5.66, samples=5 00:09:43.435 iops : min= 22, max= 26, avg=24.00, stdev= 1.41, samples=5 00:09:43.435 lat (usec) : 750=1.56% 00:09:43.435 lat (msec) : 50=96.88% 00:09:43.435 cpu : usr=0.00%, sys=0.15%, ctx=64, majf=0, minf=1 00:09:43.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.435 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.435 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.435 00:09:43.435 Run status group 0 (all jobs): 00:09:43.435 READ: bw=5291KiB/s (5418kB/s), 96.6KiB/s-3840KiB/s (98.9kB/s-3932kB/s), io=16.5MiB (17.3MB), run=2608-3187msec 00:09:43.435 00:09:43.435 Disk stats (read/write): 00:09:43.435 nvme0n1: ios=2769/0, merge=0/0, ticks=3451/0, in_queue=3451, util=97.70% 00:09:43.435 nvme0n2: ios=925/0, merge=0/0, ticks=3413/0, in_queue=3413, util=98.48% 00:09:43.435 nvme0n3: ios=406/0, merge=0/0, ticks=3515/0, in_queue=3515, util=100.00% 00:09:43.435 nvme0n4: ios=63/0, merge=0/0, ticks=2586/0, in_queue=2586, util=96.42% 00:09:43.696 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.696 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:43.954 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:09:43.954 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:43.954 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.954 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:44.214 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:44.214 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 553908 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l 
-o NAME,SERIAL 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:44.474 nvmf hotplug test: fio failed as expected 00:09:44.474 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.733 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:44.733 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.734 rmmod 
nvme_tcp 00:09:44.734 rmmod nvme_fabrics 00:09:44.734 rmmod nvme_keyring 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 550369 ']' 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 550369 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 550369 ']' 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 550369 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.734 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 550369 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 550369' 00:09:44.994 killing process with pid 550369 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 550369 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 550369 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.994 
08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.994 08:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.999 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:46.999 00:09:46.999 real 0m29.422s 00:09:46.999 user 2m29.051s 00:09:46.999 sys 0m9.509s 00:09:46.999 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.999 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.999 ************************************ 00:09:46.999 END TEST nvmf_fio_target 00:09:46.999 ************************************ 00:09:46.999 08:55:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:47.000 08:55:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.000 08:55:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.000 08:55:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.260 ************************************ 00:09:47.260 START TEST nvmf_bdevio 00:09:47.260 ************************************ 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:47.260 * Looking for test storage... 00:09:47.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.260 08:55:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.260 08:55:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.260 --rc genhtml_branch_coverage=1 00:09:47.260 --rc genhtml_function_coverage=1 00:09:47.260 --rc genhtml_legend=1 00:09:47.260 --rc geninfo_all_blocks=1 00:09:47.260 --rc geninfo_unexecuted_blocks=1 00:09:47.260 00:09:47.260 ' 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.260 --rc genhtml_branch_coverage=1 00:09:47.260 --rc genhtml_function_coverage=1 00:09:47.260 --rc genhtml_legend=1 00:09:47.260 --rc geninfo_all_blocks=1 00:09:47.260 --rc geninfo_unexecuted_blocks=1 00:09:47.260 00:09:47.260 ' 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.260 --rc genhtml_branch_coverage=1 00:09:47.260 --rc genhtml_function_coverage=1 00:09:47.260 --rc genhtml_legend=1 00:09:47.260 --rc geninfo_all_blocks=1 00:09:47.260 --rc geninfo_unexecuted_blocks=1 00:09:47.260 00:09:47.260 ' 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.260 --rc genhtml_branch_coverage=1 00:09:47.260 --rc 
genhtml_function_coverage=1 00:09:47.260 --rc genhtml_legend=1 00:09:47.260 --rc geninfo_all_blocks=1 00:09:47.260 --rc geninfo_unexecuted_blocks=1 00:09:47.260 00:09:47.260 ' 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.260 08:55:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.260 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.261 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.522 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:47.522 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:47.522 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.522 08:55:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:55.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.663 08:55:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:55.663 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:55.663 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.663 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:55.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.664 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:55.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:09:55.664 00:09:55.664 --- 10.0.0.2 ping statistics --- 00:09:55.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.664 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:09:55.664 00:09:55.664 --- 10.0.0.1 ping statistics --- 00:09:55.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.664 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=559428 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 559428 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 559428 ']' 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.664 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.664 [2024-11-20 08:55:20.282005] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:09:55.664 [2024-11-20 08:55:20.282073] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.664 [2024-11-20 08:55:20.382401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.664 [2024-11-20 08:55:20.435079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.664 [2024-11-20 08:55:20.435131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:55.664 [2024-11-20 08:55:20.435141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.664 [2024-11-20 08:55:20.435148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.664 [2024-11-20 08:55:20.435154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.664 [2024-11-20 08:55:20.437204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:55.664 [2024-11-20 08:55:20.437418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:55.664 [2024-11-20 08:55:20.437576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:55.664 [2024-11-20 08:55:20.437577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.664 [2024-11-20 08:55:21.161014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.664 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.927 Malloc0 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.927 [2024-11-20 
08:55:21.241041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:55.927 { 00:09:55.927 "params": { 00:09:55.927 "name": "Nvme$subsystem", 00:09:55.927 "trtype": "$TEST_TRANSPORT", 00:09:55.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:55.927 "adrfam": "ipv4", 00:09:55.927 "trsvcid": "$NVMF_PORT", 00:09:55.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:55.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:55.927 "hdgst": ${hdgst:-false}, 00:09:55.927 "ddgst": ${ddgst:-false} 00:09:55.927 }, 00:09:55.927 "method": "bdev_nvme_attach_controller" 00:09:55.927 } 00:09:55.927 EOF 00:09:55.927 )") 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:55.927 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:55.927 "params": { 00:09:55.927 "name": "Nvme1", 00:09:55.927 "trtype": "tcp", 00:09:55.927 "traddr": "10.0.0.2", 00:09:55.927 "adrfam": "ipv4", 00:09:55.927 "trsvcid": "4420", 00:09:55.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:55.927 "hdgst": false, 00:09:55.927 "ddgst": false 00:09:55.927 }, 00:09:55.927 "method": "bdev_nvme_attach_controller" 00:09:55.927 }' 00:09:55.927 [2024-11-20 08:55:21.300067] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:09:55.927 [2024-11-20 08:55:21.300131] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid559665 ] 00:09:55.927 [2024-11-20 08:55:21.391801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.927 [2024-11-20 08:55:21.449234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.927 [2024-11-20 08:55:21.449406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.927 [2024-11-20 08:55:21.449406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.187 I/O targets: 00:09:56.187 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:56.187 00:09:56.187 00:09:56.187 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.187 http://cunit.sourceforge.net/ 00:09:56.187 00:09:56.187 00:09:56.187 Suite: bdevio tests on: Nvme1n1 00:09:56.187 Test: blockdev write read block ...passed 00:09:56.447 Test: blockdev write zeroes read block ...passed 00:09:56.447 Test: blockdev write zeroes read no split ...passed 00:09:56.447 Test: blockdev write zeroes read split 
...passed 00:09:56.448 Test: blockdev write zeroes read split partial ...passed 00:09:56.448 Test: blockdev reset ...[2024-11-20 08:55:21.797072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:56.448 [2024-11-20 08:55:21.797177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e9970 (9): Bad file descriptor 00:09:56.448 [2024-11-20 08:55:21.811046] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:56.448 passed 00:09:56.448 Test: blockdev write read 8 blocks ...passed 00:09:56.448 Test: blockdev write read size > 128k ...passed 00:09:56.448 Test: blockdev write read invalid size ...passed 00:09:56.448 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:56.448 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:56.448 Test: blockdev write read max offset ...passed 00:09:56.709 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:56.709 Test: blockdev writev readv 8 blocks ...passed 00:09:56.709 Test: blockdev writev readv 30 x 1block ...passed 00:09:56.709 Test: blockdev writev readv block ...passed 00:09:56.709 Test: blockdev writev readv size > 128k ...passed 00:09:56.709 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:56.709 Test: blockdev comparev and writev ...[2024-11-20 08:55:22.078616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.709 [2024-11-20 08:55:22.078662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:56.709 [2024-11-20 08:55:22.078679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.709 [2024-11-20 
08:55:22.078688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:56.709 [2024-11-20 08:55:22.079245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.709 [2024-11-20 08:55:22.079262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:56.709 [2024-11-20 08:55:22.079276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.709 [2024-11-20 08:55:22.079285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:56.709 [2024-11-20 08:55:22.079875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.709 [2024-11-20 08:55:22.079886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:56.709 [2024-11-20 08:55:22.079900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.709 [2024-11-20 08:55:22.079908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:56.709 [2024-11-20 08:55:22.080496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.709 [2024-11-20 08:55:22.080509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:56.709 [2024-11-20 08:55:22.080523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.709 [2024-11-20 08:55:22.080531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:56.709 passed 00:09:56.709 Test: blockdev nvme passthru rw ...passed 00:09:56.709 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:55:22.164043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.709 [2024-11-20 08:55:22.164060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:56.709 [2024-11-20 08:55:22.164452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.709 [2024-11-20 08:55:22.164464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:56.709 [2024-11-20 08:55:22.164870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.709 [2024-11-20 08:55:22.164881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:56.709 [2024-11-20 08:55:22.165278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.709 [2024-11-20 08:55:22.165291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:56.709 passed 00:09:56.709 Test: blockdev nvme admin passthru ...passed 00:09:56.709 Test: blockdev copy ...passed 00:09:56.709 00:09:56.709 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.709 suites 1 1 n/a 0 0 00:09:56.709 tests 23 23 23 0 0 00:09:56.709 asserts 152 152 152 0 n/a 00:09:56.709 00:09:56.709 Elapsed time = 1.123 seconds 
00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.969 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.969 rmmod nvme_tcp 00:09:56.969 rmmod nvme_fabrics 00:09:56.970 rmmod nvme_keyring 00:09:56.970 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.970 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:56.970 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:56.970 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 559428 ']' 00:09:56.970 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 559428 00:09:56.970 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 559428 ']' 00:09:56.970 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 559428 00:09:56.970 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:56.970 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.970 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 559428 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 559428' 00:09:57.231 killing process with pid 559428 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 559428 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 559428 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.231 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.781 08:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.781 00:09:59.781 real 0m12.224s 00:09:59.781 user 0m13.212s 00:09:59.781 sys 0m6.269s 00:09:59.781 08:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.781 08:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.781 ************************************ 00:09:59.781 END TEST nvmf_bdevio 00:09:59.781 ************************************ 00:09:59.781 08:55:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:59.781 00:09:59.781 real 5m5.176s 00:09:59.781 user 11m45.073s 00:09:59.781 sys 1m52.155s 00:09:59.781 08:55:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.781 08:55:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.781 ************************************ 00:09:59.781 END TEST nvmf_target_core 00:09:59.781 ************************************ 00:09:59.781 08:55:24 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:59.781 08:55:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.781 08:55:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.781 08:55:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:09:59.781 ************************************ 00:09:59.781 START TEST nvmf_target_extra 00:09:59.781 ************************************ 00:09:59.781 08:55:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:59.781 * Looking for test storage... 00:09:59.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.781 --rc genhtml_branch_coverage=1 00:09:59.781 --rc genhtml_function_coverage=1 00:09:59.781 --rc genhtml_legend=1 00:09:59.781 --rc geninfo_all_blocks=1 
00:09:59.781 --rc geninfo_unexecuted_blocks=1 00:09:59.781 00:09:59.781 ' 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.781 --rc genhtml_branch_coverage=1 00:09:59.781 --rc genhtml_function_coverage=1 00:09:59.781 --rc genhtml_legend=1 00:09:59.781 --rc geninfo_all_blocks=1 00:09:59.781 --rc geninfo_unexecuted_blocks=1 00:09:59.781 00:09:59.781 ' 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.781 --rc genhtml_branch_coverage=1 00:09:59.781 --rc genhtml_function_coverage=1 00:09:59.781 --rc genhtml_legend=1 00:09:59.781 --rc geninfo_all_blocks=1 00:09:59.781 --rc geninfo_unexecuted_blocks=1 00:09:59.781 00:09:59.781 ' 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.781 --rc genhtml_branch_coverage=1 00:09:59.781 --rc genhtml_function_coverage=1 00:09:59.781 --rc genhtml_legend=1 00:09:59.781 --rc geninfo_all_blocks=1 00:09:59.781 --rc geninfo_unexecuted_blocks=1 00:09:59.781 00:09:59.781 ' 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.781 08:55:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:59.782 ************************************ 00:09:59.782 START TEST nvmf_example 00:09:59.782 ************************************ 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:59.782 * Looking for test storage... 00:09:59.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.782 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.044 
08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.044 --rc genhtml_branch_coverage=1 00:10:00.044 --rc genhtml_function_coverage=1 00:10:00.044 --rc genhtml_legend=1 00:10:00.044 --rc geninfo_all_blocks=1 00:10:00.044 --rc geninfo_unexecuted_blocks=1 00:10:00.044 00:10:00.044 ' 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.044 --rc genhtml_branch_coverage=1 00:10:00.044 --rc genhtml_function_coverage=1 00:10:00.044 --rc genhtml_legend=1 00:10:00.044 --rc geninfo_all_blocks=1 00:10:00.044 --rc geninfo_unexecuted_blocks=1 00:10:00.044 00:10:00.044 ' 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.044 --rc genhtml_branch_coverage=1 00:10:00.044 --rc genhtml_function_coverage=1 00:10:00.044 --rc genhtml_legend=1 00:10:00.044 --rc geninfo_all_blocks=1 00:10:00.044 --rc geninfo_unexecuted_blocks=1 00:10:00.044 00:10:00.044 ' 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.044 --rc 
genhtml_branch_coverage=1 00:10:00.044 --rc genhtml_function_coverage=1 00:10:00.044 --rc genhtml_legend=1 00:10:00.044 --rc geninfo_all_blocks=1 00:10:00.044 --rc geninfo_unexecuted_blocks=1 00:10:00.044 00:10:00.044 ' 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.044 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:00.045 08:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.045 
08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.045 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.197 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.198 08:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:08.198 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:08.198 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:08.198 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.198 08:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:08.198 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.198 
08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:10:08.198 00:10:08.198 --- 10.0.0.2 ping statistics --- 00:10:08.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.198 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:10:08.198 00:10:08.198 --- 10.0.0.1 ping statistics --- 00:10:08.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.198 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.198 08:55:32 
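[editor's note] The `nvmf_tcp_init` trace above moves one port of the NIC pair into a network namespace so the target (10.0.0.2, `cvl_0_0`) and the initiator (10.0.0.1, `cvl_0_1`) talk over a real link, then pings both directions. A condensed dry-run sketch of that plumbing (interface names and addresses are taken from this log; the real commands need root and this hardware, so the stand-in below only prints them):

```shell
#!/usr/bin/env sh
# Dry-run replay of the namespace setup nvmf_tcp_init performs in the log.
# Replace 'echo +' with nothing (or sudo) to actually execute.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                               # target namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move target port in
run ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ping -c 1 10.0.0.2                                         # initiator -> target
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
```

Because the target port lives in its own namespace, the TCP traffic crosses the physical cable between the two ports instead of being short-circuited through the host stack.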
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.198 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=564195 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 564195 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 564195 ']' 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:08.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.199 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:08.461 08:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:08.461 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:20.697 Initializing NVMe Controllers 00:10:20.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:20.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:20.697 Initialization complete. Launching workers. 00:10:20.697 ======================================================== 00:10:20.697 Latency(us) 00:10:20.697 Device Information : IOPS MiB/s Average min max 00:10:20.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18244.86 71.27 3507.53 632.87 16158.88 00:10:20.697 ======================================================== 00:10:20.697 Total : 18244.86 71.27 3507.53 632.87 16158.88 00:10:20.697 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:20.697 rmmod nvme_tcp 00:10:20.697 rmmod nvme_fabrics 00:10:20.697 rmmod nvme_keyring 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
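[editor's note] The `rpc_cmd` calls traced above (transport, malloc bdev, subsystem, namespace, listener) are the whole target-side configuration behind the perf run. Condensed as a dry-run sequence using SPDK's `scripts/rpc.py` client (the arguments are copied from this log; a running nvmf app is required, so the stand-in only prints the commands):

```shell
#!/usr/bin/env sh
# Dry-run replay of the RPC sequence rpc_cmd issues in the log above.
# Point RPC at the real scripts/rpc.py (and drop 'echo') to execute.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8 KiB in-capsule data
rpc bdev_malloc_create 64 512                                  # 64 MiB bdev, 512 B blocks -> Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as NSID 1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the listener is up, `spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10` (as in the log) drives the workload against `traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1`.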
00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 564195 ']' 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 564195 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 564195 ']' 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 564195 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 564195 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 564195' 00:10:20.697 killing process with pid 564195 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 564195 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 564195 00:10:20.697 nvmf threads initialize successfully 00:10:20.697 bdev subsystem init successfully 00:10:20.697 created a nvmf target service 00:10:20.697 create targets's poll groups done 00:10:20.697 all subsystems of target started 00:10:20.697 nvmf target is running 00:10:20.697 all subsystems of target stopped 00:10:20.697 destroy targets's poll groups done 00:10:20.697 destroyed the nvmf target service 00:10:20.697 bdev subsystem finish 
successfully 00:10:20.697 nvmf threads destroy successfully 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.697 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.268 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:21.268 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:21.268 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:21.268 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:21.268 00:10:21.268 real 0m21.499s 00:10:21.268 user 0m47.007s 00:10:21.268 sys 0m7.008s 00:10:21.268 08:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.268 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:21.268 ************************************ 00:10:21.268 END TEST nvmf_example 00:10:21.268 ************************************ 00:10:21.268 08:55:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:21.268 08:55:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:21.268 08:55:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.268 08:55:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:21.268 ************************************ 00:10:21.268 START TEST nvmf_filesystem 00:10:21.268 ************************************ 00:10:21.268 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:21.532 * Looking for test storage... 
00:10:21.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:21.532 
08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:21.532 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:21.532 --rc genhtml_branch_coverage=1 00:10:21.532 --rc genhtml_function_coverage=1 00:10:21.532 --rc genhtml_legend=1 00:10:21.532 --rc geninfo_all_blocks=1 00:10:21.532 --rc geninfo_unexecuted_blocks=1 00:10:21.532 00:10:21.532 ' 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:21.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.532 --rc genhtml_branch_coverage=1 00:10:21.532 --rc genhtml_function_coverage=1 00:10:21.532 --rc genhtml_legend=1 00:10:21.532 --rc geninfo_all_blocks=1 00:10:21.532 --rc geninfo_unexecuted_blocks=1 00:10:21.532 00:10:21.532 ' 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:21.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.532 --rc genhtml_branch_coverage=1 00:10:21.532 --rc genhtml_function_coverage=1 00:10:21.532 --rc genhtml_legend=1 00:10:21.532 --rc geninfo_all_blocks=1 00:10:21.532 --rc geninfo_unexecuted_blocks=1 00:10:21.532 00:10:21.532 ' 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:21.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.532 --rc genhtml_branch_coverage=1 00:10:21.532 --rc genhtml_function_coverage=1 00:10:21.532 --rc genhtml_legend=1 00:10:21.532 --rc geninfo_all_blocks=1 00:10:21.532 --rc geninfo_unexecuted_blocks=1 00:10:21.532 00:10:21.532 ' 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:21.532 08:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:21.532 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:21.533 08:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:21.533 08:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:21.533 08:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:21.533 08:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:21.533 
08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:21.533 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:21.533 #define SPDK_CONFIG_H 00:10:21.533 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:21.534 #define SPDK_CONFIG_APPS 1 00:10:21.534 #define SPDK_CONFIG_ARCH native 00:10:21.534 #undef SPDK_CONFIG_ASAN 00:10:21.534 #undef SPDK_CONFIG_AVAHI 00:10:21.534 #undef SPDK_CONFIG_CET 00:10:21.534 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:21.534 #define SPDK_CONFIG_COVERAGE 1 00:10:21.534 #define SPDK_CONFIG_CROSS_PREFIX 00:10:21.534 #undef SPDK_CONFIG_CRYPTO 00:10:21.534 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:21.534 #undef SPDK_CONFIG_CUSTOMOCF 00:10:21.534 #undef SPDK_CONFIG_DAOS 00:10:21.534 #define SPDK_CONFIG_DAOS_DIR 00:10:21.534 #define SPDK_CONFIG_DEBUG 1 00:10:21.534 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:21.534 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:21.534 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:21.534 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:21.534 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:21.534 #undef SPDK_CONFIG_DPDK_UADK 00:10:21.534 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:21.534 #define SPDK_CONFIG_EXAMPLES 1 00:10:21.534 #undef SPDK_CONFIG_FC 00:10:21.534 #define SPDK_CONFIG_FC_PATH 00:10:21.534 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:21.534 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:21.534 #define SPDK_CONFIG_FSDEV 1 00:10:21.534 #undef SPDK_CONFIG_FUSE 00:10:21.534 #undef SPDK_CONFIG_FUZZER 00:10:21.534 #define SPDK_CONFIG_FUZZER_LIB 00:10:21.534 #undef SPDK_CONFIG_GOLANG 00:10:21.534 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:21.534 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:21.534 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:21.534 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:21.534 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:21.534 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:21.534 #undef SPDK_CONFIG_HAVE_LZ4 00:10:21.534 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:21.534 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:21.534 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:21.534 #define SPDK_CONFIG_IDXD 1 00:10:21.534 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:21.534 #undef SPDK_CONFIG_IPSEC_MB 00:10:21.534 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:21.534 #define SPDK_CONFIG_ISAL 1 00:10:21.534 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:21.534 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:21.534 #define SPDK_CONFIG_LIBDIR 00:10:21.534 #undef SPDK_CONFIG_LTO 00:10:21.534 #define SPDK_CONFIG_MAX_LCORES 128 00:10:21.534 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:21.534 #define SPDK_CONFIG_NVME_CUSE 1 00:10:21.534 #undef SPDK_CONFIG_OCF 00:10:21.534 #define SPDK_CONFIG_OCF_PATH 00:10:21.534 #define SPDK_CONFIG_OPENSSL_PATH 00:10:21.534 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:21.534 #define SPDK_CONFIG_PGO_DIR 00:10:21.534 #undef SPDK_CONFIG_PGO_USE 00:10:21.534 #define SPDK_CONFIG_PREFIX /usr/local 00:10:21.534 #undef SPDK_CONFIG_RAID5F 00:10:21.534 #undef SPDK_CONFIG_RBD 00:10:21.534 #define SPDK_CONFIG_RDMA 1 00:10:21.534 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:21.534 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:21.534 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:21.534 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:21.534 #define SPDK_CONFIG_SHARED 1 00:10:21.534 #undef SPDK_CONFIG_SMA 00:10:21.534 #define SPDK_CONFIG_TESTS 1 00:10:21.534 #undef SPDK_CONFIG_TSAN 00:10:21.534 #define SPDK_CONFIG_UBLK 1 00:10:21.534 #define SPDK_CONFIG_UBSAN 1 00:10:21.534 #undef SPDK_CONFIG_UNIT_TESTS 00:10:21.534 #undef SPDK_CONFIG_URING 00:10:21.534 #define SPDK_CONFIG_URING_PATH 00:10:21.534 #undef SPDK_CONFIG_URING_ZNS 00:10:21.534 #undef SPDK_CONFIG_USDT 00:10:21.534 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:21.534 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:21.534 #define SPDK_CONFIG_VFIO_USER 1 00:10:21.534 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:21.534 #define SPDK_CONFIG_VHOST 1 00:10:21.534 #define SPDK_CONFIG_VIRTIO 1 00:10:21.534 #undef SPDK_CONFIG_VTUNE 00:10:21.534 #define SPDK_CONFIG_VTUNE_DIR 00:10:21.534 #define SPDK_CONFIG_WERROR 1 00:10:21.534 #define SPDK_CONFIG_WPDK_DIR 00:10:21.534 #undef SPDK_CONFIG_XNVME 00:10:21.534 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:21.534 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:21.534 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.534 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.534 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.534 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.534 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.534 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:21.534 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:21.535 08:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:21.535 
08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:21.535 08:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:21.535 
08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:21.535 08:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:21.535 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:21.536 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:21.536 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:21.536 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:21.536 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:21.536 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:21.536 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:21.799 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 566983 ]] 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 566983 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.43Cgli 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.43Cgli/tests/target /tmp/spdk.43Cgli 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118611791872 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10744717312 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:21.800 08:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677830656 00:10:21.800 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=425984 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:21.801 * Looking for test storage... 
00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118611791872 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=12959309824 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.801 08:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:21.801 08:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:21.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.801 --rc genhtml_branch_coverage=1 00:10:21.801 --rc genhtml_function_coverage=1 00:10:21.801 --rc genhtml_legend=1 00:10:21.801 --rc geninfo_all_blocks=1 00:10:21.801 --rc geninfo_unexecuted_blocks=1 00:10:21.801 00:10:21.801 ' 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:21.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.801 --rc genhtml_branch_coverage=1 00:10:21.801 --rc genhtml_function_coverage=1 00:10:21.801 --rc genhtml_legend=1 00:10:21.801 --rc geninfo_all_blocks=1 00:10:21.801 --rc geninfo_unexecuted_blocks=1 00:10:21.801 00:10:21.801 ' 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:21.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.801 --rc genhtml_branch_coverage=1 00:10:21.801 --rc genhtml_function_coverage=1 00:10:21.801 --rc genhtml_legend=1 00:10:21.801 --rc geninfo_all_blocks=1 00:10:21.801 --rc geninfo_unexecuted_blocks=1 00:10:21.801 00:10:21.801 ' 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:21.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.801 --rc genhtml_branch_coverage=1 00:10:21.801 --rc genhtml_function_coverage=1 00:10:21.801 --rc genhtml_legend=1 00:10:21.801 --rc geninfo_all_blocks=1 00:10:21.801 --rc geninfo_unexecuted_blocks=1 00:10:21.801 00:10:21.801 ' 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.801 08:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.801 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:21.802 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.942 08:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:29.942 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:29.942 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.942 08:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:29.942 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:29.942 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:29.942 08:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.942 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:29.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.737 ms 00:10:29.943 00:10:29.943 --- 10.0.0.2 ping statistics --- 00:10:29.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.943 rtt min/avg/max/mdev = 0.737/0.737/0.737/0.000 ms 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:29.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:10:29.943 00:10:29.943 --- 10.0.0.1 ping statistics --- 00:10:29.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.943 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:29.943 08:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.943 ************************************ 00:10:29.943 START TEST nvmf_filesystem_no_in_capsule 00:10:29.943 ************************************ 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=570941 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 570941 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 570941 ']' 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.943 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.943 [2024-11-20 08:55:54.995777] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:10:29.943 [2024-11-20 08:55:54.995840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.943 [2024-11-20 08:55:55.097293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.943 [2024-11-20 08:55:55.151109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.943 [2024-11-20 08:55:55.151174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:29.943 [2024-11-20 08:55:55.151184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.943 [2024-11-20 08:55:55.151191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.943 [2024-11-20 08:55:55.151198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.943 [2024-11-20 08:55:55.153637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.943 [2024-11-20 08:55:55.153800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.943 [2024-11-20 08:55:55.153965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.943 [2024-11-20 08:55:55.153966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.514 [2024-11-20 08:55:55.870313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.514 Malloc1 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.514 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.514 [2024-11-20 08:55:56.021520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:30.514 08:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.514 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.775 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.775 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:30.775 { 00:10:30.775 "name": "Malloc1", 00:10:30.775 "aliases": [ 00:10:30.775 "c3113203-f8c0-4c69-9b1d-33b1c9258786" 00:10:30.775 ], 00:10:30.775 "product_name": "Malloc disk", 00:10:30.775 "block_size": 512, 00:10:30.775 "num_blocks": 1048576, 00:10:30.775 "uuid": "c3113203-f8c0-4c69-9b1d-33b1c9258786", 00:10:30.775 "assigned_rate_limits": { 00:10:30.775 "rw_ios_per_sec": 0, 00:10:30.775 "rw_mbytes_per_sec": 0, 00:10:30.775 "r_mbytes_per_sec": 0, 00:10:30.775 "w_mbytes_per_sec": 0 00:10:30.775 }, 00:10:30.775 "claimed": true, 00:10:30.775 "claim_type": "exclusive_write", 00:10:30.775 "zoned": false, 00:10:30.775 "supported_io_types": { 00:10:30.775 "read": true, 00:10:30.775 "write": true, 00:10:30.775 "unmap": true, 00:10:30.775 "flush": true, 00:10:30.775 "reset": true, 00:10:30.775 "nvme_admin": false, 00:10:30.775 "nvme_io": false, 00:10:30.775 "nvme_io_md": false, 00:10:30.775 "write_zeroes": true, 00:10:30.775 "zcopy": true, 00:10:30.775 "get_zone_info": false, 00:10:30.775 "zone_management": false, 00:10:30.775 "zone_append": false, 00:10:30.775 "compare": false, 00:10:30.775 "compare_and_write": 
false, 00:10:30.775 "abort": true, 00:10:30.775 "seek_hole": false, 00:10:30.775 "seek_data": false, 00:10:30.775 "copy": true, 00:10:30.775 "nvme_iov_md": false 00:10:30.775 }, 00:10:30.775 "memory_domains": [ 00:10:30.775 { 00:10:30.775 "dma_device_id": "system", 00:10:30.775 "dma_device_type": 1 00:10:30.775 }, 00:10:30.775 { 00:10:30.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.775 "dma_device_type": 2 00:10:30.775 } 00:10:30.775 ], 00:10:30.775 "driver_specific": {} 00:10:30.775 } 00:10:30.775 ]' 00:10:30.775 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:30.775 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:30.775 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:30.775 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:30.775 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:30.775 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:30.775 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:30.775 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:32.325 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:32.325 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:32.325 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.325 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:32.325 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:34.238 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:34.500 08:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:34.500 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:34.500 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:35.070 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:36.012 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:36.012 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:36.012 08:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:36.012 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.012 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.273 ************************************ 00:10:36.273 START TEST filesystem_ext4 00:10:36.273 ************************************ 00:10:36.273 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:36.273 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:36.273 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.273 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:36.273 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:36.273 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:36.273 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:36.273 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:36.274 08:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:36.274 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:36.274 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:36.274 mke2fs 1.47.0 (5-Feb-2023) 00:10:36.274 Discarding device blocks: 0/522240 done 00:10:36.274 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:36.274 Filesystem UUID: e7e71adf-c5d9-4ccb-a5c1-a6169a3cbb3b 00:10:36.274 Superblock backups stored on blocks: 00:10:36.274 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:36.274 00:10:36.274 Allocating group tables: 0/64 done 00:10:36.274 Writing inode tables: 0/64 done 00:10:36.533 Creating journal (8192 blocks): done 00:10:37.915 Writing superblocks and filesystem accounting information: 0/64 done 00:10:37.915 00:10:37.915 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:37.915 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:43.200 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:43.200 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:43.201 08:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 570941 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:43.201 00:10:43.201 real 0m7.113s 00:10:43.201 user 0m0.036s 00:10:43.201 sys 0m0.069s 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:43.201 ************************************ 00:10:43.201 END TEST filesystem_ext4 00:10:43.201 ************************************ 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:43.201 
08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.201 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.462 ************************************ 00:10:43.462 START TEST filesystem_btrfs 00:10:43.462 ************************************ 00:10:43.462 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:43.462 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:43.462 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.462 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:43.462 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:43.462 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:43.462 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:43.462 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:43.462 08:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:43.462 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:43.462 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:43.723 btrfs-progs v6.8.1 00:10:43.723 See https://btrfs.readthedocs.io for more information. 00:10:43.723 00:10:43.723 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:43.723 NOTE: several default settings have changed in version 5.15, please make sure 00:10:43.723 this does not affect your deployments: 00:10:43.723 - DUP for metadata (-m dup) 00:10:43.723 - enabled no-holes (-O no-holes) 00:10:43.723 - enabled free-space-tree (-R free-space-tree) 00:10:43.723 00:10:43.723 Label: (null) 00:10:43.723 UUID: ca5e667b-ff8f-4c85-8768-2110ad4d88e1 00:10:43.723 Node size: 16384 00:10:43.723 Sector size: 4096 (CPU page size: 4096) 00:10:43.723 Filesystem size: 510.00MiB 00:10:43.723 Block group profiles: 00:10:43.723 Data: single 8.00MiB 00:10:43.723 Metadata: DUP 32.00MiB 00:10:43.723 System: DUP 8.00MiB 00:10:43.723 SSD detected: yes 00:10:43.723 Zoned device: no 00:10:43.723 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:43.723 Checksum: crc32c 00:10:43.723 Number of devices: 1 00:10:43.723 Devices: 00:10:43.723 ID SIZE PATH 00:10:43.723 1 510.00MiB /dev/nvme0n1p1 00:10:43.723 00:10:43.723 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:43.723 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:43.984 08:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 570941 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:43.984 00:10:43.984 real 0m0.678s 00:10:43.984 user 0m0.029s 00:10:43.984 sys 0m0.119s 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.984 
08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:43.984 ************************************ 00:10:43.984 END TEST filesystem_btrfs 00:10:43.984 ************************************ 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.984 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.244 ************************************ 00:10:44.244 START TEST filesystem_xfs 00:10:44.244 ************************************ 00:10:44.244 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:44.244 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:44.244 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.244 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:44.244 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:44.244 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:44.244 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:44.244 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:44.244 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:44.244 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:44.244 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:44.244 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:44.244 = sectsz=512 attr=2, projid32bit=1 00:10:44.244 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:44.244 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:44.244 data = bsize=4096 blocks=130560, imaxpct=25 00:10:44.244 = sunit=0 swidth=0 blks 00:10:44.244 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:44.244 log =internal log bsize=4096 blocks=16384, version=2 00:10:44.244 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:44.244 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:45.185 Discarding blocks...Done. 
00:10:45.185 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:45.185 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:47.728 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:47.728 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:47.728 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:47.728 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:47.728 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:47.728 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:47.728 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 570941 00:10:47.728 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:47.728 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:47.988 08:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:47.988 00:10:47.988 real 0m3.749s 00:10:47.988 user 0m0.028s 00:10:47.988 sys 0m0.080s 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:47.988 ************************************ 00:10:47.988 END TEST filesystem_xfs 00:10:47.988 ************************************ 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 570941 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 570941 ']' 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 570941 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.988 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 570941 00:10:48.249 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.249 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.249 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 570941' 00:10:48.249 killing process with pid 570941 00:10:48.249 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 570941 00:10:48.249 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 570941 00:10:48.249 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:48.249 00:10:48.249 real 0m18.803s 00:10:48.249 user 1m14.245s 00:10:48.249 sys 0m1.454s 00:10:48.249 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.249 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.249 ************************************ 00:10:48.249 END TEST nvmf_filesystem_no_in_capsule 00:10:48.249 ************************************ 00:10:48.249 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:48.249 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:48.249 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.249 08:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.509 ************************************ 00:10:48.509 START TEST nvmf_filesystem_in_capsule 00:10:48.509 ************************************ 00:10:48.509 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:48.509 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:48.509 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:48.510 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:48.510 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.510 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.510 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=574782 00:10:48.510 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 574782 00:10:48.510 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:48.510 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 574782 ']' 00:10:48.510 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.510 08:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.510 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.510 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.510 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.510 [2024-11-20 08:56:13.875919] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:10:48.510 [2024-11-20 08:56:13.875976] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.510 [2024-11-20 08:56:13.971843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.510 [2024-11-20 08:56:14.013208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.510 [2024-11-20 08:56:14.013251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.510 [2024-11-20 08:56:14.013257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.510 [2024-11-20 08:56:14.013262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.510 [2024-11-20 08:56:14.013266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
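The `nvmfappstart` sequence above launches `nvmf_tgt` inside the netns and then blocks in `waitforlisten 574782` until the RPC socket `/var/tmp/spdk.sock` comes up. A rough sketch of that wait loop, under the assumption that it polls for the UNIX socket while checking the target PID is still alive (the exact body is not shown in the trace):

```shell
# Hedged sketch of waitforlisten: poll until the SPDK RPC UNIX-domain
# socket exists, bailing out early if the target process has died.
# Default socket path and retry budget are assumptions from the trace.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i=0
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  while (( i++ < max_retries )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
    [ -S "$rpc_addr" ] && return 0           # RPC socket is up
    sleep 0.2
  done
  return 1
}
```

Once the socket is up, the test drives the target over RPC: `nvmf_create_transport -t tcp -c 4096` (the in-capsule data size under test), `bdev_malloc_create`, `nvmf_create_subsystem`, `nvmf_subsystem_add_ns`, and `nvmf_subsystem_add_listener`, as the subsequent trace lines show.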
00:10:48.510 [2024-11-20 08:56:14.015071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.510 [2024-11-20 08:56:14.015203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.510 [2024-11-20 08:56:14.015260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.510 [2024-11-20 08:56:14.015261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.452 [2024-11-20 08:56:14.729523] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.452 Malloc1 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.452 08:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.452 [2024-11-20 08:56:14.853853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.452 08:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:49.452 { 00:10:49.452 "name": "Malloc1", 00:10:49.452 "aliases": [ 00:10:49.452 "3b77de48-9403-4a8b-95e5-b5f4e34c99c8" 00:10:49.452 ], 00:10:49.452 "product_name": "Malloc disk", 00:10:49.452 "block_size": 512, 00:10:49.452 "num_blocks": 1048576, 00:10:49.452 "uuid": "3b77de48-9403-4a8b-95e5-b5f4e34c99c8", 00:10:49.452 "assigned_rate_limits": { 00:10:49.452 "rw_ios_per_sec": 0, 00:10:49.452 "rw_mbytes_per_sec": 0, 00:10:49.452 "r_mbytes_per_sec": 0, 00:10:49.452 "w_mbytes_per_sec": 0 00:10:49.452 }, 00:10:49.452 "claimed": true, 00:10:49.452 "claim_type": "exclusive_write", 00:10:49.452 "zoned": false, 00:10:49.452 "supported_io_types": { 00:10:49.452 "read": true, 00:10:49.452 "write": true, 00:10:49.452 "unmap": true, 00:10:49.452 "flush": true, 00:10:49.452 "reset": true, 00:10:49.452 "nvme_admin": false, 00:10:49.452 "nvme_io": false, 00:10:49.452 "nvme_io_md": false, 00:10:49.452 "write_zeroes": true, 00:10:49.452 "zcopy": true, 00:10:49.452 "get_zone_info": false, 00:10:49.452 "zone_management": false, 00:10:49.452 "zone_append": false, 00:10:49.452 "compare": false, 00:10:49.452 "compare_and_write": false, 00:10:49.452 "abort": true, 00:10:49.452 "seek_hole": false, 00:10:49.452 "seek_data": false, 00:10:49.452 "copy": true, 00:10:49.452 "nvme_iov_md": false 00:10:49.452 }, 00:10:49.452 "memory_domains": [ 00:10:49.452 { 00:10:49.452 "dma_device_id": "system", 00:10:49.452 "dma_device_type": 1 00:10:49.452 }, 00:10:49.452 { 00:10:49.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.452 "dma_device_type": 2 00:10:49.452 } 00:10:49.452 ], 00:10:49.452 
"driver_specific": {} 00:10:49.452 } 00:10:49.452 ]' 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:49.452 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:49.713 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:49.713 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:49.713 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:49.713 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:49.713 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.096 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:51.096 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:51.096 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.096 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:51.096 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:53.642 08:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:53.642 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:53.642 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.026 ************************************ 00:10:55.026 START TEST filesystem_in_capsule_ext4 00:10:55.026 ************************************ 00:10:55.026 08:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:55.026 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:55.026 mke2fs 1.47.0 (5-Feb-2023) 00:10:55.026 Discarding device blocks: 
0/522240 done 00:10:55.026 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:55.026 Filesystem UUID: 4569feba-da8c-42b6-8b3d-560934f1dfa7 00:10:55.026 Superblock backups stored on blocks: 00:10:55.026 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:55.026 00:10:55.026 Allocating group tables: 0/64 done 00:10:55.026 Writing inode tables: 0/64 done 00:10:55.026 Creating journal (8192 blocks): done 00:10:57.243 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:10:57.243 00:10:57.243 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:57.243 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 574782 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.821 00:11:03.821 real 0m8.570s 00:11:03.821 user 0m0.024s 00:11:03.821 sys 0m0.082s 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:03.821 ************************************ 00:11:03.821 END TEST filesystem_in_capsule_ext4 00:11:03.821 ************************************ 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.821 ************************************ 00:11:03.821 START 
TEST filesystem_in_capsule_btrfs 00:11:03.821 ************************************ 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:03.821 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:03.821 btrfs-progs v6.8.1 00:11:03.821 See https://btrfs.readthedocs.io for more information. 00:11:03.821 00:11:03.821 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:03.821 NOTE: several default settings have changed in version 5.15, please make sure 00:11:03.821 this does not affect your deployments: 00:11:03.821 - DUP for metadata (-m dup) 00:11:03.821 - enabled no-holes (-O no-holes) 00:11:03.821 - enabled free-space-tree (-R free-space-tree) 00:11:03.821 00:11:03.821 Label: (null) 00:11:03.821 UUID: dd203da1-07cf-407b-8fb1-080e0341d23b 00:11:03.821 Node size: 16384 00:11:03.821 Sector size: 4096 (CPU page size: 4096) 00:11:03.821 Filesystem size: 510.00MiB 00:11:03.821 Block group profiles: 00:11:03.821 Data: single 8.00MiB 00:11:03.821 Metadata: DUP 32.00MiB 00:11:03.821 System: DUP 8.00MiB 00:11:03.821 SSD detected: yes 00:11:03.821 Zoned device: no 00:11:03.821 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:03.821 Checksum: crc32c 00:11:03.821 Number of devices: 1 00:11:03.821 Devices: 00:11:03.821 ID SIZE PATH 00:11:03.821 1 510.00MiB /dev/nvme0n1p1 00:11:03.821 00:11:03.821 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:03.821 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.081 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.081 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:04.081 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.082 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:04.082 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:04.082 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.082 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 574782 00:11:04.082 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.082 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:04.342 00:11:04.342 real 0m0.822s 00:11:04.342 user 0m0.033s 00:11:04.342 sys 0m0.118s 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:04.342 ************************************ 00:11:04.342 END TEST filesystem_in_capsule_btrfs 00:11:04.342 ************************************ 00:11:04.342 08:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.342 ************************************ 00:11:04.342 START TEST filesystem_in_capsule_xfs 00:11:04.342 ************************************ 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:04.342 
08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:04.342 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:04.342 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:04.342 = sectsz=512 attr=2, projid32bit=1 00:11:04.342 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:04.342 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:04.342 data = bsize=4096 blocks=130560, imaxpct=25 00:11:04.342 = sunit=0 swidth=0 blks 00:11:04.342 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:04.342 log =internal log bsize=4096 blocks=16384, version=2 00:11:04.342 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:04.342 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:05.285 Discarding blocks...Done. 
00:11:05.285 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:05.285 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.840 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.840 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:07.840 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.840 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:07.840 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:07.840 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.840 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 574782 00:11:07.840 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.840 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.840 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:07.840 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.840 00:11:07.840 real 0m3.243s 00:11:07.841 user 0m0.028s 00:11:07.841 sys 0m0.077s 00:11:07.841 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.841 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:07.841 ************************************ 00:11:07.841 END TEST filesystem_in_capsule_xfs 00:11:07.841 ************************************ 00:11:07.841 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.841 08:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 574782 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 574782 ']' 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 574782 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.841 08:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 574782 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 574782' 00:11:07.841 killing process with pid 574782 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 574782 00:11:07.841 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 574782 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:08.102 00:11:08.102 real 0m19.640s 00:11:08.102 user 1m17.758s 00:11:08.102 sys 0m1.349s 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.102 ************************************ 00:11:08.102 END TEST nvmf_filesystem_in_capsule 00:11:08.102 ************************************ 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.102 rmmod nvme_tcp 00:11:08.102 rmmod nvme_fabrics 00:11:08.102 rmmod nvme_keyring 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.102 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:10.648 00:11:10.648 real 0m48.892s 00:11:10.648 user 2m34.374s 00:11:10.648 sys 0m8.828s 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:10.648 ************************************ 00:11:10.648 END TEST nvmf_filesystem 00:11:10.648 ************************************ 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:10.648 ************************************ 00:11:10.648 START TEST nvmf_target_discovery 00:11:10.648 ************************************ 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:10.648 * Looking for test storage... 
00:11:10.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:10.648 
08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.648 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:10.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.649 --rc genhtml_branch_coverage=1 00:11:10.649 --rc genhtml_function_coverage=1 00:11:10.649 --rc genhtml_legend=1 00:11:10.649 --rc geninfo_all_blocks=1 00:11:10.649 --rc geninfo_unexecuted_blocks=1 00:11:10.649 00:11:10.649 ' 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:10.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.649 --rc genhtml_branch_coverage=1 00:11:10.649 --rc genhtml_function_coverage=1 00:11:10.649 --rc genhtml_legend=1 00:11:10.649 --rc geninfo_all_blocks=1 00:11:10.649 --rc geninfo_unexecuted_blocks=1 00:11:10.649 00:11:10.649 ' 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:10.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.649 --rc genhtml_branch_coverage=1 00:11:10.649 --rc genhtml_function_coverage=1 00:11:10.649 --rc genhtml_legend=1 00:11:10.649 --rc geninfo_all_blocks=1 00:11:10.649 --rc geninfo_unexecuted_blocks=1 00:11:10.649 00:11:10.649 ' 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:10.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.649 --rc genhtml_branch_coverage=1 00:11:10.649 --rc genhtml_function_coverage=1 00:11:10.649 --rc genhtml_legend=1 00:11:10.649 --rc geninfo_all_blocks=1 00:11:10.649 --rc geninfo_unexecuted_blocks=1 00:11:10.649 00:11:10.649 ' 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.649 08:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0
00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:10.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:10.649 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:11:10.650 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=()
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=()
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=()
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:11:18.790 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:11:18.790 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:11:18.790 Found net devices under 0000:4b:00.0: cvl_0_0
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:11:18.790 Found net devices under 0000:4b:00.1: cvl_0_1
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:18.790 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:18.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:18.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms
00:11:18.791 
00:11:18.791 --- 10.0.0.2 ping statistics ---
00:11:18.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:18.791 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:18.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:18.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms
00:11:18.791 
00:11:18.791 --- 10.0.0.1 ping statistics ---
00:11:18.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:18.791 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=583035
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 583035
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 583035 ']'
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:18.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:18.791 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:18.791 [2024-11-20 08:56:43.553293] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization...
00:11:18.791 [2024-11-20 08:56:43.553358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:18.791 [2024-11-20 08:56:43.654854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:18.791 [2024-11-20 08:56:43.707719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:18.791 [2024-11-20 08:56:43.707775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:18.791 [2024-11-20 08:56:43.707785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:18.791 [2024-11-20 08:56:43.707792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:18.791 [2024-11-20 08:56:43.707798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:18.791 [2024-11-20 08:56:43.709857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:18.791 [2024-11-20 08:56:43.710020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:18.791 [2024-11-20 08:56:43.710206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:18.791 [2024-11-20 08:56:43.710256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.053 [2024-11-20 08:56:44.434324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.053 Null1
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.053 [2024-11-20 08:56:44.494872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.053 Null2
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.053 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:11:19.054 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:11:19.054 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.054 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.054 Null3
00:11:19.054 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.054 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:11:19.054 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.054 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.054 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.054 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:11:19.054 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.054 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.316 Null4
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.316 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:11:19.599 
00:11:19.599 Discovery Log Number of Records 6, Generation counter 6
00:11:19.599 =====Discovery Log Entry 0======
00:11:19.599 trtype: tcp
00:11:19.599 adrfam: ipv4
00:11:19.599 subtype: current discovery subsystem
00:11:19.599 treq: not required
00:11:19.599 portid: 0
00:11:19.599 trsvcid: 4420
00:11:19.599 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:19.599 traddr: 10.0.0.2
00:11:19.599 eflags: explicit discovery connections, duplicate discovery information
00:11:19.599 sectype: none
00:11:19.599 =====Discovery Log Entry 1======
00:11:19.599 trtype: tcp
00:11:19.599 adrfam: ipv4
00:11:19.599 subtype: nvme subsystem
00:11:19.599 treq: not required
00:11:19.599 portid: 0
00:11:19.599 trsvcid: 4420
00:11:19.599 subnqn: nqn.2016-06.io.spdk:cnode1
00:11:19.599 traddr: 10.0.0.2
00:11:19.599 eflags: none
00:11:19.599 sectype: none
00:11:19.599 =====Discovery Log Entry 2======
00:11:19.599 trtype: tcp
00:11:19.599 adrfam: ipv4
00:11:19.599 subtype: nvme subsystem
00:11:19.599 treq: not required
00:11:19.599 portid: 0
00:11:19.599 trsvcid: 4420
00:11:19.599 subnqn: nqn.2016-06.io.spdk:cnode2
00:11:19.599 traddr: 10.0.0.2
00:11:19.599 eflags: none
00:11:19.599 sectype: none
00:11:19.599 =====Discovery Log Entry 3======
00:11:19.599 trtype: tcp
00:11:19.599 adrfam: ipv4
00:11:19.599 subtype: nvme subsystem
00:11:19.599 treq: not required
00:11:19.599 portid: 0
00:11:19.599 trsvcid: 4420
00:11:19.599 subnqn: nqn.2016-06.io.spdk:cnode3
00:11:19.599 traddr: 10.0.0.2
00:11:19.599 eflags: none
00:11:19.599 sectype: none
00:11:19.599 =====Discovery Log Entry 4======
00:11:19.599 trtype: tcp
00:11:19.599 adrfam: ipv4
00:11:19.599 subtype: nvme subsystem
00:11:19.599 treq: not required
00:11:19.599 portid: 0
00:11:19.599 trsvcid: 4420
00:11:19.599 subnqn: nqn.2016-06.io.spdk:cnode4
00:11:19.599 traddr: 10.0.0.2
00:11:19.599 eflags: none
00:11:19.599 sectype: none
00:11:19.599 =====Discovery Log Entry 5======
00:11:19.599 trtype: tcp
00:11:19.599 adrfam: ipv4
00:11:19.599 subtype: discovery subsystem referral
00:11:19.599 treq: not required
00:11:19.599 portid: 0
00:11:19.599 trsvcid: 4430
00:11:19.599 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:19.599 traddr: 10.0.0.2
00:11:19.599 eflags: none
00:11:19.599 sectype: none
00:11:19.599 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:11:19.599 Perform nvmf subsystem discovery via RPC
00:11:19.599 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:11:19.599 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.599 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:19.599 [
00:11:19.599 {
00:11:19.599 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:11:19.599 "subtype": "Discovery",
00:11:19.599 "listen_addresses": [
00:11:19.600 {
00:11:19.600 "trtype": "TCP",
00:11:19.600 "adrfam": "IPv4",
00:11:19.600 "traddr": "10.0.0.2",
00:11:19.600 "trsvcid": "4420"
00:11:19.600 }
00:11:19.600 ],
00:11:19.600 "allow_any_host": true,
00:11:19.600 "hosts": []
00:11:19.600 },
00:11:19.600 {
00:11:19.600 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:19.600 "subtype": "NVMe",
00:11:19.600 "listen_addresses": [
00:11:19.600 {
00:11:19.600 "trtype": "TCP",
00:11:19.600 "adrfam": "IPv4",
00:11:19.600 "traddr": "10.0.0.2",
00:11:19.600 "trsvcid": "4420"
00:11:19.600 }
00:11:19.600 ],
00:11:19.600 "allow_any_host": true,
00:11:19.600 "hosts": [],
00:11:19.600 "serial_number": "SPDK00000000000001",
00:11:19.600 "model_number": "SPDK bdev Controller",
00:11:19.600 "max_namespaces": 32,
00:11:19.600 "min_cntlid": 1,
00:11:19.600 "max_cntlid": 65519,
00:11:19.600 "namespaces": [
00:11:19.600 {
00:11:19.600 "nsid": 1,
00:11:19.600 "bdev_name": "Null1",
00:11:19.600 "name": "Null1",
00:11:19.600 "nguid": "F44D64E2E810400D9F0164C4EC545C11",
00:11:19.600 "uuid": "f44d64e2-e810-400d-9f01-64c4ec545c11"
00:11:19.600 }
00:11:19.600 ]
00:11:19.600 },
00:11:19.600 {
00:11:19.600 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:19.600 "subtype": "NVMe",
00:11:19.600 "listen_addresses": [
00:11:19.600 {
00:11:19.600 "trtype": "TCP",
00:11:19.600 "adrfam": "IPv4",
00:11:19.600 "traddr": "10.0.0.2",
00:11:19.600 "trsvcid": "4420"
00:11:19.600 }
00:11:19.600 ],
00:11:19.600 "allow_any_host": true,
00:11:19.600 "hosts": [],
00:11:19.600 "serial_number": "SPDK00000000000002",
00:11:19.600 "model_number": "SPDK bdev Controller",
00:11:19.600 "max_namespaces": 32,
00:11:19.600 "min_cntlid": 1,
00:11:19.600 "max_cntlid": 65519,
00:11:19.600 "namespaces": [
00:11:19.600 {
00:11:19.600 "nsid": 1,
00:11:19.600 "bdev_name": "Null2",
00:11:19.600 "name": "Null2",
00:11:19.600 "nguid": "5A75A1EDBE3B44E681B987AF68742C4C",
00:11:19.600 "uuid": "5a75a1ed-be3b-44e6-81b9-87af68742c4c"
00:11:19.600 }
00:11:19.600 ]
00:11:19.600 },
00:11:19.600 {
00:11:19.600 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:11:19.600 "subtype": "NVMe",
00:11:19.600 "listen_addresses": [
00:11:19.600 {
00:11:19.600 "trtype": "TCP",
00:11:19.600 "adrfam": "IPv4",
00:11:19.600 "traddr": "10.0.0.2",
00:11:19.600 "trsvcid": "4420"
00:11:19.600 }
00:11:19.600 ],
00:11:19.600 "allow_any_host": true,
00:11:19.600 "hosts": [],
00:11:19.600 "serial_number": "SPDK00000000000003",
00:11:19.600 "model_number": "SPDK bdev Controller",
00:11:19.600 "max_namespaces": 32,
00:11:19.600 "min_cntlid": 1,
00:11:19.600 "max_cntlid": 65519,
00:11:19.600 "namespaces": [
00:11:19.600 {
00:11:19.600 "nsid": 1,
00:11:19.600 "bdev_name": "Null3",
00:11:19.600 "name": "Null3",
00:11:19.600 "nguid": "7438405E60F64D868F33BF8959FCE6F0",
00:11:19.600 "uuid": "7438405e-60f6-4d86-8f33-bf8959fce6f0"
00:11:19.600 }
00:11:19.600 ]
00:11:19.600 },
00:11:19.600 {
00:11:19.600 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:11:19.600 "subtype": "NVMe",
00:11:19.600 "listen_addresses": [
00:11:19.600 {
00:11:19.600 "trtype": "TCP",
00:11:19.600 "adrfam": "IPv4",
00:11:19.600 "traddr": "10.0.0.2",
00:11:19.600 "trsvcid": "4420"
00:11:19.600 }
00:11:19.600 ],
00:11:19.600 "allow_any_host": true,
00:11:19.600 "hosts": [],
00:11:19.600 "serial_number": "SPDK00000000000004",
00:11:19.600 "model_number": "SPDK bdev Controller",
00:11:19.600 "max_namespaces": 32,
00:11:19.600 "min_cntlid": 1,
00:11:19.600 "max_cntlid": 65519,
00:11:19.600 "namespaces": [
00:11:19.600 {
00:11:19.600 "nsid": 1,
00:11:19.600 "bdev_name": "Null4",
00:11:19.600 "name": "Null4",
00:11:19.600 "nguid": "E77FA16278B844D9AAC4CDFAA5FF4045",
00:11:19.600 "uuid": "e77fa162-78b8-44d9-aac4-cdfaa5ff4045"
00:11:19.600 }
00:11:19.600 ]
00:11:19.600 }
00:11:19.600 ]
00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.600 
08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.600 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.600 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.600 rmmod nvme_tcp 00:11:19.600 rmmod nvme_fabrics 00:11:19.862 rmmod nvme_keyring 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 583035 ']' 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 583035 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 583035 ']' 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 583035 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:19.862 
08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 583035 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 583035' 00:11:19.862 killing process with pid 583035 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 583035 00:11:19.862 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 583035 00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns
00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:20.124 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:22.037 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:22.037
00:11:22.037 real 0m11.746s
00:11:22.037 user 0m9.129s
00:11:22.037 sys 0m6.109s
00:11:22.037 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:22.037 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:22.037 ************************************
00:11:22.037 END TEST nvmf_target_discovery
00:11:22.037 ************************************
00:11:22.037 08:56:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:11:22.037 08:56:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:22.037 08:56:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:22.037 08:56:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:22.298 ************************************
00:11:22.298 START TEST nvmf_referrals
00:11:22.298 ************************************
00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:11:22.298 * Looking for test storage...
00:11:22.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.298 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:22.299 08:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:22.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.299 
--rc genhtml_branch_coverage=1 00:11:22.299 --rc genhtml_function_coverage=1 00:11:22.299 --rc genhtml_legend=1 00:11:22.299 --rc geninfo_all_blocks=1 00:11:22.299 --rc geninfo_unexecuted_blocks=1 00:11:22.299 00:11:22.299 ' 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:22.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.299 --rc genhtml_branch_coverage=1 00:11:22.299 --rc genhtml_function_coverage=1 00:11:22.299 --rc genhtml_legend=1 00:11:22.299 --rc geninfo_all_blocks=1 00:11:22.299 --rc geninfo_unexecuted_blocks=1 00:11:22.299 00:11:22.299 ' 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:22.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.299 --rc genhtml_branch_coverage=1 00:11:22.299 --rc genhtml_function_coverage=1 00:11:22.299 --rc genhtml_legend=1 00:11:22.299 --rc geninfo_all_blocks=1 00:11:22.299 --rc geninfo_unexecuted_blocks=1 00:11:22.299 00:11:22.299 ' 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:22.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.299 --rc genhtml_branch_coverage=1 00:11:22.299 --rc genhtml_function_coverage=1 00:11:22.299 --rc genhtml_legend=1 00:11:22.299 --rc geninfo_all_blocks=1 00:11:22.299 --rc geninfo_unexecuted_blocks=1 00:11:22.299 00:11:22.299 ' 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.299 
08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.299 08:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.299 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.299 08:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.300 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.300 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.300 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.300 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:22.300 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:22.300 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.300 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:30.441 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:30.441 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:30.441 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:30.441 08:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.441 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.442 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.442 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:30.442 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
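The `Found net devices under 0000:4b:00.x: cvl_0_x` messages come from globbing the `net/` directory under each PCI device and stripping the path prefix (`nvmf/common.sh@411` and `@427`). A self-contained sketch using a temporary directory in place of `/sys`, so it runs without hardware:

```shell
#!/usr/bin/env bash
# Recreate the net-device discovery idiom from nvmf/common.sh with a fake
# sysfs tree. The glob yields full paths; the ${var##*/} expansion keeps
# only the last path component (the interface name).
tmp=$(mktemp -d)
pci="0000:4b:00.0"
mkdir -p "$tmp/$pci/net/cvl_0_0"

pci_net_devs=("$tmp/$pci/net/"*)          # e.g. $tmp/0000:4b:00.0/net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip dirs -> cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$tmp"
```

`${array[@]##*/}` applies the longest-prefix removal to every element at once, which is why the trace shows a single assignment rather than a loop.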
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:30.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:11:30.442 00:11:30.442 --- 10.0.0.2 ping statistics --- 00:11:30.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.442 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:11:30.442 00:11:30.442 --- 10.0.0.1 ping statistics --- 00:11:30.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.442 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=587483 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 587483 00:11:30.442 
08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 587483 ']' 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.442 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.442 [2024-11-20 08:56:55.404327] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:11:30.442 [2024-11-20 08:56:55.404394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.442 [2024-11-20 08:56:55.508146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.442 [2024-11-20 08:56:55.564493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.442 [2024-11-20 08:56:55.564550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:30.442 [2024-11-20 08:56:55.564559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.442 [2024-11-20 08:56:55.564566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.442 [2024-11-20 08:56:55.564573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.442 [2024-11-20 08:56:55.566631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.442 [2024-11-20 08:56:55.566792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.442 [2024-11-20 08:56:55.566821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.442 [2024-11-20 08:56:55.566838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.703 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.703 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:30.703 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:30.703 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.703 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.964 [2024-11-20 08:56:56.271909] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.964 [2024-11-20 08:56:56.288288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:30.964 08:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
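The `(( 3 == 3 ))` check above counts configured referrals by piping the `nvmf_discovery_get_referrals` RPC output through `jq length` (`target/referrals.sh@48`). A sketch with a hand-written stand-in for the RPC response (requires `jq`):

```shell
#!/usr/bin/env bash
# Mock nvmf_discovery_get_referrals output: a JSON array with one object
# per referral. jq's `length` on an array returns its element count.
referrals='[
  {"address": {"trtype": "tcp", "traddr": "127.0.0.2", "trsvcid": "4430"}},
  {"address": {"trtype": "tcp", "traddr": "127.0.0.3", "trsvcid": "4430"}},
  {"address": {"trtype": "tcp", "traddr": "127.0.0.4", "trsvcid": "4430"}}
]'
count=$(echo "$referrals" | jq length)
echo "referral count: $count"
```

After the three `nvmf_discovery_remove_referral` calls later in the trace, the same pipeline returns 0, which is what the `(( 0 == 0 ))` check at `target/referrals.sh@56` asserts.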
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:30.964 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.225 08:56:56 
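The `get_referral_ips nvme` path above feeds `nvme discover ... -o json` through a jq filter that drops the discovery subsystem being queried and keeps only referral addresses, then sorts for a stable comparison (`target/referrals.sh@26`). A sketch against hand-written discovery JSON (a stand-in for real `nvme discover` output; requires `jq`):

```shell
#!/usr/bin/env bash
# Mock discovery log: the "current discovery subsystem" record is the
# target we queried and must be excluded; the rest are referrals.
json='{"records": [
  {"subtype": "current discovery subsystem", "traddr": "10.0.0.2"},
  {"subtype": "discovery subsystem referral", "traddr": "127.0.0.3"},
  {"subtype": "discovery subsystem referral", "traddr": "127.0.0.2"}
]}'
ips=$(echo "$json" \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
  | sort | xargs)
echo "$ips"
```

The `sort` matters because the test compares this list string-for-string against the sorted RPC-side list, so both must be in the same order.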
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.225 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:31.486 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.748 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:32.008 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:32.008 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:32.008 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:32.008 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:32.008 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.008 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:32.269 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:32.529 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:32.529 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:32.529 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:32.529 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:32.529 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:32.529 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.529 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:32.529 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:32.529 08:56:58 
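Comparisons like `[[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]` throughout this trace look odd but are an xtrace artifact: the right-hand side of `[[ ... == ... ]]` is a glob pattern, and when the script quotes it to force a literal match, bash's `set -x` output renders every character backslash-escaped. A minimal sketch:

```shell
#!/usr/bin/env bash
# Inside [[ x == pattern ]], an unquoted RHS is glob-matched; quoting (or
# escaping each character, as xtrace displays it) makes the match literal.
addr=127.0.0.2
matched=no
if [[ $addr == \1\2\7\.\0\.\0\.\2 ]]; then  # same as == "127.0.0.2"
  matched=yes
fi
echo "escaped-literal match: $matched"

# With an unquoted glob, '*' is a wildcard, so this also matches:
[[ $addr == 127.* ]] && echo "glob match: yes"
```

So the escaped form in the log is exactly the quoted string comparison written in `referrals.sh`, not a deliberately backslashed pattern in the source.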
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:32.529 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:32.529 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:32.788 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.048 rmmod nvme_tcp 00:11:33.048 rmmod nvme_fabrics 00:11:33.048 rmmod nvme_keyring 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 587483 ']' 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 587483 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 587483 ']' 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 587483 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.048 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587483 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587483' 00:11:33.308 killing process with pid 587483 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 587483 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 587483 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.308 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.850 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.850 00:11:35.850 real 0m13.244s 00:11:35.850 user 0m15.809s 00:11:35.850 sys 0m6.570s 00:11:35.850 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.850 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.850 ************************************ 
00:11:35.850 END TEST nvmf_referrals 00:11:35.850 ************************************ 00:11:35.850 08:57:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:35.850 08:57:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.850 08:57:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.850 08:57:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.850 ************************************ 00:11:35.850 START TEST nvmf_connect_disconnect 00:11:35.850 ************************************ 00:11:35.850 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:35.850 * Looking for test storage... 
00:11:35.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.850 --rc genhtml_branch_coverage=1 00:11:35.850 --rc genhtml_function_coverage=1 00:11:35.850 --rc genhtml_legend=1 00:11:35.850 --rc geninfo_all_blocks=1 00:11:35.850 --rc geninfo_unexecuted_blocks=1 00:11:35.850 00:11:35.850 ' 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.850 --rc genhtml_branch_coverage=1 00:11:35.850 --rc genhtml_function_coverage=1 00:11:35.850 --rc genhtml_legend=1 00:11:35.850 --rc geninfo_all_blocks=1 00:11:35.850 --rc geninfo_unexecuted_blocks=1 00:11:35.850 00:11:35.850 ' 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:35.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.850 --rc genhtml_branch_coverage=1 00:11:35.850 --rc genhtml_function_coverage=1 00:11:35.850 --rc genhtml_legend=1 00:11:35.850 --rc geninfo_all_blocks=1 00:11:35.850 --rc geninfo_unexecuted_blocks=1 00:11:35.850 00:11:35.850 ' 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.850 --rc genhtml_branch_coverage=1 00:11:35.850 --rc genhtml_function_coverage=1 00:11:35.850 --rc genhtml_legend=1 00:11:35.850 --rc geninfo_all_blocks=1 00:11:35.850 --rc geninfo_unexecuted_blocks=1 00:11:35.850 00:11:35.850 ' 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.850 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.851 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.992 08:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.992 08:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:43.992 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:43.992 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.992 08:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.992 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:43.993 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.993 08:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:43.993 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.993 08:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:11:43.993 00:11:43.993 --- 10.0.0.2 ping statistics --- 00:11:43.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.993 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:43.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:11:43.993 00:11:43.993 --- 10.0.0.1 ping statistics --- 00:11:43.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.993 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=592674 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 592674 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 592674 ']' 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.993 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.993 [2024-11-20 08:57:08.698315] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:11:43.993 [2024-11-20 08:57:08.698384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.993 [2024-11-20 08:57:08.801520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.993 [2024-11-20 08:57:08.854429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:43.993 [2024-11-20 08:57:08.854496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.993 [2024-11-20 08:57:08.854506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.993 [2024-11-20 08:57:08.854513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.993 [2024-11-20 08:57:08.854519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.993 [2024-11-20 08:57:08.856727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.993 [2024-11-20 08:57:08.856886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.993 [2024-11-20 08:57:08.857058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.993 [2024-11-20 08:57:08.857059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:44.255 08:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.255 [2024-11-20 08:57:09.577768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.255 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.256 08:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.256 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.256 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.256 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.256 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.256 [2024-11-20 08:57:09.655931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.256 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.256 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:44.256 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:44.256 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:48.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.579 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:02.579 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:02.579 08:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.579 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:02.579 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.580 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:02.580 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.580 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.580 rmmod nvme_tcp 00:12:02.580 rmmod nvme_fabrics 00:12:02.841 rmmod nvme_keyring 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 592674 ']' 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 592674 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 592674 ']' 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 592674 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 592674 00:12:02.841 
08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 592674' 00:12:02.841 killing process with pid 592674 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 592674 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 592674 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.841 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.431 00:12:05.431 real 0m29.504s 00:12:05.431 user 1m19.625s 00:12:05.431 sys 0m7.199s 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.431 ************************************ 00:12:05.431 END TEST nvmf_connect_disconnect 00:12:05.431 ************************************ 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.431 ************************************ 00:12:05.431 START TEST nvmf_multitarget 00:12:05.431 ************************************ 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:05.431 * Looking for test storage... 
00:12:05.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:05.431 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.431 --rc genhtml_branch_coverage=1 00:12:05.431 --rc genhtml_function_coverage=1 00:12:05.431 --rc genhtml_legend=1 00:12:05.431 --rc geninfo_all_blocks=1 00:12:05.431 --rc geninfo_unexecuted_blocks=1 00:12:05.431 00:12:05.431 ' 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:05.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.431 --rc genhtml_branch_coverage=1 00:12:05.431 --rc genhtml_function_coverage=1 00:12:05.431 --rc genhtml_legend=1 00:12:05.431 --rc geninfo_all_blocks=1 00:12:05.431 --rc geninfo_unexecuted_blocks=1 00:12:05.431 00:12:05.431 ' 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:05.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.431 --rc genhtml_branch_coverage=1 00:12:05.431 --rc genhtml_function_coverage=1 00:12:05.431 --rc genhtml_legend=1 00:12:05.431 --rc geninfo_all_blocks=1 00:12:05.431 --rc geninfo_unexecuted_blocks=1 00:12:05.431 00:12:05.431 ' 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:05.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.431 --rc genhtml_branch_coverage=1 00:12:05.431 --rc genhtml_function_coverage=1 00:12:05.431 --rc genhtml_legend=1 00:12:05.431 --rc geninfo_all_blocks=1 00:12:05.431 --rc geninfo_unexecuted_blocks=1 00:12:05.431 00:12:05.431 ' 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.431 08:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.431 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.432 08:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.432 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:13.573 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.573 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:13.573 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:13.573 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:13.573 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:13.573 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:13.573 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:13.573 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:13.574 08:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:13.574 08:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:13.574 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:13.574 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.574 08:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:13.574 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.574 
08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:13.574 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:13.574 08:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:13.574 08:57:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:13.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:13.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:12:13.574 00:12:13.574 --- 10.0.0.2 ping statistics --- 00:12:13.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.574 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:13.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:13.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:12:13.574 00:12:13.574 --- 10.0.0.1 ping statistics --- 00:12:13.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.574 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:13.574 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=601149 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 601149 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 601149 ']' 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.575 08:57:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:13.575 [2024-11-20 08:57:38.285072] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:12:13.575 [2024-11-20 08:57:38.285137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.575 [2024-11-20 08:57:38.386472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.575 [2024-11-20 08:57:38.439957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.575 [2024-11-20 08:57:38.440015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:13.575 [2024-11-20 08:57:38.440024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.575 [2024-11-20 08:57:38.440031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.575 [2024-11-20 08:57:38.440037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.575 [2024-11-20 08:57:38.442107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.575 [2024-11-20 08:57:38.442252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.575 [2024-11-20 08:57:38.442331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.575 [2024-11-20 08:57:38.442332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.845 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.845 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:13.845 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.845 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:13.845 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:13.845 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.845 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:13.845 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:13.845 08:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:13.845 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:13.845 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:14.112 "nvmf_tgt_1" 00:12:14.112 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:14.112 "nvmf_tgt_2" 00:12:14.112 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:14.112 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:14.112 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:14.112 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:14.373 true 00:12:14.373 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:14.373 true 00:12:14.373 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:14.373 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:14.697 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:14.697 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:14.697 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:14.697 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.697 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:14.697 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.697 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:14.697 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.697 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.697 rmmod nvme_tcp 00:12:14.697 rmmod nvme_fabrics 00:12:14.697 rmmod nvme_keyring 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 601149 ']' 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 601149 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 601149 ']' 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 601149 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601149 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601149' 00:12:14.697 killing process with pid 601149 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 601149 00:12:14.697 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 601149 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.025 08:57:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.019 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:17.019 00:12:17.019 real 0m11.892s 00:12:17.019 user 0m10.398s 00:12:17.019 sys 0m6.175s 00:12:17.019 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.019 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:17.019 ************************************ 00:12:17.019 END TEST nvmf_multitarget 00:12:17.019 ************************************ 00:12:17.019 08:57:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:17.019 08:57:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:17.019 08:57:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.019 08:57:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.019 ************************************ 00:12:17.019 START TEST nvmf_rpc 00:12:17.019 ************************************ 00:12:17.019 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:17.281 * Looking for test storage... 
00:12:17.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.281 08:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.281 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:17.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.281 --rc genhtml_branch_coverage=1 00:12:17.281 --rc genhtml_function_coverage=1 00:12:17.281 --rc genhtml_legend=1 00:12:17.281 --rc geninfo_all_blocks=1 00:12:17.282 --rc geninfo_unexecuted_blocks=1 
00:12:17.282 00:12:17.282 ' 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:17.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.282 --rc genhtml_branch_coverage=1 00:12:17.282 --rc genhtml_function_coverage=1 00:12:17.282 --rc genhtml_legend=1 00:12:17.282 --rc geninfo_all_blocks=1 00:12:17.282 --rc geninfo_unexecuted_blocks=1 00:12:17.282 00:12:17.282 ' 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:17.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.282 --rc genhtml_branch_coverage=1 00:12:17.282 --rc genhtml_function_coverage=1 00:12:17.282 --rc genhtml_legend=1 00:12:17.282 --rc geninfo_all_blocks=1 00:12:17.282 --rc geninfo_unexecuted_blocks=1 00:12:17.282 00:12:17.282 ' 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:17.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.282 --rc genhtml_branch_coverage=1 00:12:17.282 --rc genhtml_function_coverage=1 00:12:17.282 --rc genhtml_legend=1 00:12:17.282 --rc geninfo_all_blocks=1 00:12:17.282 --rc geninfo_unexecuted_blocks=1 00:12:17.282 00:12:17.282 ' 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.282 08:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:17.282 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.282 08:57:42 
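The `[: : integer expression expected` line captured above is `test(1)` rejecting `'[' '' -eq 1 ']'`: the `-eq` operator requires integer operands, and the variable expanded to an empty string. A small sketch reproducing the failure and showing a common guard (defaulting the expansion to 0); this is illustrative, not SPDK's actual fix for `nvmf/common.sh` line 33:

```shell
#!/usr/bin/env bash
# Reproduce the logged error: an empty operand to -eq makes test(1)
# print "integer expression expected" and exit non-zero.
val=""

if [ "$val" -eq 1 ] 2>/dev/null; then   # empty string is not an integer
    echo "flag set"
else
    echo "test errored or false"         # we land here: exit status was 2
fi

# Defaulting the expansion keeps the operand numeric either way.
if [ "${val:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"                    # prints: val defaults to 0
fi
```

Note the trace continues past the error: the failing `[` only affects that one conditional, so the script falls through to the `else`-equivalent path, exactly as the subsequent `common.sh@37` line shows.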
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.427 
08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.427 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:12:25.428 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:25.428 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:25.428 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:25.428 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.428 08:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.428 
08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.428 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:12:25.428 00:12:25.428 --- 10.0.0.2 ping statistics --- 00:12:25.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.428 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
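The `nvmf_tcp_init` commands traced here split the two e810 ports across a network namespace: `cvl_0_0` moves into `cvl_0_0_ns_spdk` and gets 10.0.0.2/24 (target side), `cvl_0_1` stays in the root namespace with 10.0.0.1/24 (initiator side), and an iptables rule opens port 4420, so NVMe/TCP traffic crosses a real link on a single host. A dry-run sketch of that sequence, with names and addresses taken from the log; the `run` wrapper only prints each command, so replace it with direct execution as root to actually apply the setup:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace split the trace performs. run() just
# echoes the command it is given, prefixed with "+", so this is safe
# to execute without root or the real NICs.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target-side interface (from the log)
INI_IF=cvl_0_1        # initiator-side interface

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target, as in the trace
```

The two pings in the trace (root namespace to 10.0.0.2, then `ip netns exec` back to 10.0.0.1) confirm the path works in both directions before any NVMe traffic is attempted.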
00:12:25.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:12:25.428 00:12:25.428 --- 10.0.0.1 ping statistics --- 00:12:25.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.428 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=605657 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 605657 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 605657 ']' 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.428 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.428 [2024-11-20 08:57:50.307185] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:12:25.428 [2024-11-20 08:57:50.307259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.428 [2024-11-20 08:57:50.407793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.428 [2024-11-20 08:57:50.460906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.428 [2024-11-20 08:57:50.460958] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:25.428 [2024-11-20 08:57:50.460966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.428 [2024-11-20 08:57:50.460974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.428 [2024-11-20 08:57:50.460980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.428 [2024-11-20 08:57:50.463116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.429 [2024-11-20 08:57:50.463250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.429 [2024-11-20 08:57:50.463622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.429 [2024-11-20 08:57:50.463626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.690 08:57:51 
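`waitforlisten 605657` above blocks until the freshly launched `nvmf_tgt` opens its RPC socket at `/var/tmp/spdk.sock` (the `local max_retries=100` in the trace is its retry budget). The essential shape of that helper is a bounded poll for the socket; a simplified stand-in, with the retry count and sleep interval chosen for illustration:

```shell
#!/usr/bin/env bash
# Poll for a UNIX-domain socket with a bounded retry budget, the way
# waitforlisten waits for /var/tmp/spdk.sock before issuing RPCs.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0   # socket exists: target is listening
        sleep 0.1
    done
    return 1                         # retry budget exhausted
}
```

The real helper layers more onto this (the traced `'[' -z 605657 ']'` and PID checks let a crashed target fail fast instead of burning the whole budget), but the bounded socket poll is the core of it.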
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:25.690 "tick_rate": 2400000000, 00:12:25.690 "poll_groups": [ 00:12:25.690 { 00:12:25.690 "name": "nvmf_tgt_poll_group_000", 00:12:25.690 "admin_qpairs": 0, 00:12:25.690 "io_qpairs": 0, 00:12:25.690 "current_admin_qpairs": 0, 00:12:25.690 "current_io_qpairs": 0, 00:12:25.690 "pending_bdev_io": 0, 00:12:25.690 "completed_nvme_io": 0, 00:12:25.690 "transports": [] 00:12:25.690 }, 00:12:25.690 { 00:12:25.690 "name": "nvmf_tgt_poll_group_001", 00:12:25.690 "admin_qpairs": 0, 00:12:25.690 "io_qpairs": 0, 00:12:25.690 "current_admin_qpairs": 0, 00:12:25.690 "current_io_qpairs": 0, 00:12:25.690 "pending_bdev_io": 0, 00:12:25.690 "completed_nvme_io": 0, 00:12:25.690 "transports": [] 00:12:25.690 }, 00:12:25.690 { 00:12:25.690 "name": "nvmf_tgt_poll_group_002", 00:12:25.690 "admin_qpairs": 0, 00:12:25.690 "io_qpairs": 0, 00:12:25.690 "current_admin_qpairs": 0, 00:12:25.690 "current_io_qpairs": 0, 00:12:25.690 "pending_bdev_io": 0, 00:12:25.690 "completed_nvme_io": 0, 00:12:25.690 "transports": [] 00:12:25.690 }, 00:12:25.690 { 00:12:25.690 "name": "nvmf_tgt_poll_group_003", 00:12:25.690 "admin_qpairs": 0, 00:12:25.690 "io_qpairs": 0, 00:12:25.690 "current_admin_qpairs": 0, 00:12:25.690 "current_io_qpairs": 0, 00:12:25.690 "pending_bdev_io": 0, 00:12:25.690 "completed_nvme_io": 0, 00:12:25.690 "transports": [] 00:12:25.690 } 00:12:25.690 ] 00:12:25.690 }' 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:25.690 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:25.952 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:25.952 08:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:25.952 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:25.952 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:25.952 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.952 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.952 [2024-11-20 08:57:51.304244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.952 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.952 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:25.952 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.952 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.952 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.952 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:25.952 "tick_rate": 2400000000, 00:12:25.952 "poll_groups": [ 00:12:25.952 { 00:12:25.953 "name": "nvmf_tgt_poll_group_000", 00:12:25.953 "admin_qpairs": 0, 00:12:25.953 "io_qpairs": 0, 00:12:25.953 "current_admin_qpairs": 0, 00:12:25.953 "current_io_qpairs": 0, 00:12:25.953 "pending_bdev_io": 0, 00:12:25.953 "completed_nvme_io": 0, 00:12:25.953 "transports": [ 00:12:25.953 { 00:12:25.953 "trtype": "TCP" 00:12:25.953 } 00:12:25.953 ] 00:12:25.953 }, 00:12:25.953 { 00:12:25.953 "name": "nvmf_tgt_poll_group_001", 00:12:25.953 "admin_qpairs": 0, 00:12:25.953 "io_qpairs": 0, 00:12:25.953 "current_admin_qpairs": 0, 00:12:25.953 "current_io_qpairs": 0, 00:12:25.953 "pending_bdev_io": 0, 00:12:25.953 
"completed_nvme_io": 0, 00:12:25.953 "transports": [ 00:12:25.953 { 00:12:25.953 "trtype": "TCP" 00:12:25.953 } 00:12:25.953 ] 00:12:25.953 }, 00:12:25.953 { 00:12:25.953 "name": "nvmf_tgt_poll_group_002", 00:12:25.953 "admin_qpairs": 0, 00:12:25.953 "io_qpairs": 0, 00:12:25.953 "current_admin_qpairs": 0, 00:12:25.953 "current_io_qpairs": 0, 00:12:25.953 "pending_bdev_io": 0, 00:12:25.953 "completed_nvme_io": 0, 00:12:25.953 "transports": [ 00:12:25.953 { 00:12:25.953 "trtype": "TCP" 00:12:25.953 } 00:12:25.953 ] 00:12:25.953 }, 00:12:25.953 { 00:12:25.953 "name": "nvmf_tgt_poll_group_003", 00:12:25.953 "admin_qpairs": 0, 00:12:25.953 "io_qpairs": 0, 00:12:25.953 "current_admin_qpairs": 0, 00:12:25.953 "current_io_qpairs": 0, 00:12:25.953 "pending_bdev_io": 0, 00:12:25.953 "completed_nvme_io": 0, 00:12:25.953 "transports": [ 00:12:25.953 { 00:12:25.953 "trtype": "TCP" 00:12:25.953 } 00:12:25.953 ] 00:12:25.953 } 00:12:25.953 ] 00:12:25.953 }' 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:25.953 
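The `jcount` and `jsum` helpers exercised against the stats JSON above are thin wrappers: pipe the `nvmf_get_stats` output through `jq` with a filter like `.poll_groups[].admin_qpairs`, then count matches with `wc -l` or total them with `awk '{s+=$1}END{print s}'`. An awk-only stand-in for the summing half, fed the kind of pretty-printed stats shown in the trace (the helper name `jsum_awk` is illustrative; the real helpers use `jq`):

```shell
#!/usr/bin/env bash
# Sum every occurrence of a numeric field in pretty-printed JSON, the
# way jsum totals a counter across poll groups; awk stands in for jq,
# assuming one "key": value pair per line as rpc_cmd prints it.
jsum_awk() {
    local key=$1
    awk -v k="\"$key\":" '$1 == k { gsub(/,/, "", $2); s += $2 } END { print s + 0 }'
}

# Four poll groups, all idle, as in the captured stats:
jsum_awk io_qpairs <<'EOF'
      "io_qpairs": 0,
      "io_qpairs": 0,
      "io_qpairs": 0,
      "io_qpairs": 0,
EOF
# prints 0
```

That zero is exactly what the trace's `(( 0 == 0 ))` check asserts: no I/O queue pairs exist before the TCP transport has accepted any connections.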
08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.953 Malloc1 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.953 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:26.215 08:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 [2024-11-20 08:57:51.516851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:26.215 [2024-11-20 08:57:51.553911] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:26.215 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:26.215 could not add new controller: failed to write to nvme-fabrics device 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.215 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.602 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.603 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:27.603 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.603 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:27.603 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
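After `nvme connect` succeeds, the harness's `waitforserial` polls `lsblk -l -o NAME,SERIAL` until a block device carrying the subsystem serial (here `SPDKISFASTANDAWESOME`) appears, retrying up to 15 times with a 2-second sleep. The sketch below is an approximation of that loop, not the harness code; the `probe` parameter is my addition so the device listing command can be swapped out, whereas the real helper hardcodes `lsblk`:

```shell
#!/bin/sh
# Sketch of waitforserial: poll a NAME,SERIAL listing until the expected
# serial shows up, bounded by the harness's 15 x 2s retry budget.
waitforserial() {
    probe=$1    # command printing NAME,SERIAL lines (harness: lsblk -l -o NAME,SERIAL)
    serial=$2   # serial string to wait for
    i=0
    while [ "$i" -le 15 ]; do
        if $probe 2>/dev/null | grep -q -w "$serial"; then
            return 0        # device is visible
        fi
        i=$((i + 1))
        sleep 2
    done
    return 1                # gave up; caller treats this as a test failure
}
# harness-style usage: waitforserial "lsblk -l -o NAME,SERIAL" SPDKISFASTANDAWESOME
```

`grep -w` matches the serial as a whole word, so a serial that is a prefix of another device's serial cannot produce a false positive.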
00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:30.150 08:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.150 [2024-11-20 08:57:55.299479] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:30.150 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:30.150 could not add new controller: failed to write to nvme-fabrics device 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:30.150 
08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.150 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.534 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.534 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:31.534 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.534 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:31.534 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.445 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.445 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.445 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:33.445 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.445 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.445 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:33.445 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.706 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.706 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:33.707 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:33.707 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:33.707 08:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.707 [2024-11-20 08:57:59.063718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.707 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.618 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.618 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:35.618 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.618 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:35.618 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.530 
08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.530 [2024-11-20 08:58:02.823966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.530 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.913 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.913 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:38.913 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.913 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:38.913 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.456 08:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.456 [2024-11-20 08:58:06.593682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.456 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.845 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.845 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:42.845 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.845 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:42.845 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:44.762 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:44.762 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:44.762 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.762 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:44.762 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.762 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:44.762 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
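The repeated `rpc.sh@81`..`@94` blocks above are iterations of one loop: for each of five passes, create the subsystem, add a listener and namespace, open it to any host, connect and disconnect from the host side, then remove the namespace and delete the subsystem. A condensed dry-run sketch of that loop follows; `rpc_cmd` here is a stand-in that only echoes, whereas in the harness it drives a live `spdk_tgt` over its RPC socket (the 10.0.0.2 listener address comes from the log above):

```shell
#!/bin/sh
# Dry-run sketch of the rpc.sh create/wire-up/teardown loop.
run_rpc_loop() {
    rpc_cmd() { echo "rpc_cmd $*"; }   # stand-in; harness sends real RPCs
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host "$nqn"
        # host side: nvme connect ... / waitforserial / nvme disconnect
        rpc_cmd nvmf_subsystem_remove_ns "$nqn" 5
        rpc_cmd nvmf_delete_subsystem "$nqn"
    done
}
```

Each pass tears everything down before the next `nvmf_create_subsystem`, which is why the same NQN and serial can be reused on every iteration.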
00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 [2024-11-20 08:58:10.417994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.022 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.407 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.407 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:46.407 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:46.407 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:46.407 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:48.952 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:48.952 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:48.952 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.952 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:48.952 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.952 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:48.952 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.952 [2024-11-20 08:58:14.139986] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.952 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.337 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.337 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:50.337 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.337 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:50.337 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:52.252 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:52.252 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:52.252 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.252 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:52.252 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.252 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:52.252 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.514 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 [2024-11-20 08:58:17.875500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 [2024-11-20 08:58:17.947668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 
08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:52.515 [2024-11-20 08:58:18.015877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.515 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.777 [2024-11-20 08:58:18.088116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.777 [2024-11-20 08:58:18.160364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.777 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:52.778 "tick_rate": 2400000000, 00:12:52.778 "poll_groups": [ 00:12:52.778 { 00:12:52.778 "name": "nvmf_tgt_poll_group_000", 00:12:52.778 "admin_qpairs": 0, 00:12:52.778 "io_qpairs": 224, 00:12:52.778 "current_admin_qpairs": 0, 00:12:52.778 "current_io_qpairs": 0, 00:12:52.778 "pending_bdev_io": 0, 00:12:52.778 "completed_nvme_io": 276, 00:12:52.778 "transports": [ 00:12:52.778 { 00:12:52.778 "trtype": "TCP" 00:12:52.778 } 00:12:52.778 ] 00:12:52.778 }, 00:12:52.778 { 00:12:52.778 "name": "nvmf_tgt_poll_group_001", 00:12:52.778 "admin_qpairs": 1, 00:12:52.778 "io_qpairs": 223, 00:12:52.778 "current_admin_qpairs": 0, 00:12:52.778 "current_io_qpairs": 0, 00:12:52.778 "pending_bdev_io": 0, 00:12:52.778 "completed_nvme_io": 258, 00:12:52.778 "transports": [ 00:12:52.778 { 00:12:52.778 "trtype": "TCP" 00:12:52.778 } 00:12:52.778 ] 00:12:52.778 }, 00:12:52.778 { 00:12:52.778 "name": "nvmf_tgt_poll_group_002", 00:12:52.778 "admin_qpairs": 6, 00:12:52.778 "io_qpairs": 218, 00:12:52.778 "current_admin_qpairs": 0, 00:12:52.778 "current_io_qpairs": 0, 00:12:52.778 "pending_bdev_io": 0, 
00:12:52.778 "completed_nvme_io": 479, 00:12:52.778 "transports": [ 00:12:52.778 { 00:12:52.778 "trtype": "TCP" 00:12:52.778 } 00:12:52.778 ] 00:12:52.778 }, 00:12:52.778 { 00:12:52.778 "name": "nvmf_tgt_poll_group_003", 00:12:52.778 "admin_qpairs": 0, 00:12:52.778 "io_qpairs": 224, 00:12:52.778 "current_admin_qpairs": 0, 00:12:52.778 "current_io_qpairs": 0, 00:12:52.778 "pending_bdev_io": 0, 00:12:52.778 "completed_nvme_io": 226, 00:12:52.778 "transports": [ 00:12:52.778 { 00:12:52.778 "trtype": "TCP" 00:12:52.778 } 00:12:52.778 ] 00:12:52.778 } 00:12:52.778 ] 00:12:52.778 }' 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:52.778 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:53.039 rmmod nvme_tcp 00:12:53.039 rmmod nvme_fabrics 00:12:53.039 rmmod nvme_keyring 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 605657 ']' 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 605657 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 605657 ']' 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 605657 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 605657 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 605657' 00:12:53.039 killing process with pid 605657 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 605657 00:12:53.039 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 605657 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.301 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.211 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:55.211 00:12:55.211 real 0m38.224s 00:12:55.211 user 1m54.572s 00:12:55.211 sys 0m7.876s 00:12:55.211 08:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.211 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.211 ************************************ 00:12:55.211 END TEST nvmf_rpc 00:12:55.211 ************************************ 00:12:55.211 08:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:55.211 08:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:55.211 08:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.211 08:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.473 ************************************ 00:12:55.473 START TEST nvmf_invalid 00:12:55.473 ************************************ 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:55.473 * Looking for test storage... 
00:12:55.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.473 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:55.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.473 --rc genhtml_branch_coverage=1 00:12:55.473 --rc 
genhtml_function_coverage=1 00:12:55.474 --rc genhtml_legend=1 00:12:55.474 --rc geninfo_all_blocks=1 00:12:55.474 --rc geninfo_unexecuted_blocks=1 00:12:55.474 00:12:55.474 ' 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:55.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.474 --rc genhtml_branch_coverage=1 00:12:55.474 --rc genhtml_function_coverage=1 00:12:55.474 --rc genhtml_legend=1 00:12:55.474 --rc geninfo_all_blocks=1 00:12:55.474 --rc geninfo_unexecuted_blocks=1 00:12:55.474 00:12:55.474 ' 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:55.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.474 --rc genhtml_branch_coverage=1 00:12:55.474 --rc genhtml_function_coverage=1 00:12:55.474 --rc genhtml_legend=1 00:12:55.474 --rc geninfo_all_blocks=1 00:12:55.474 --rc geninfo_unexecuted_blocks=1 00:12:55.474 00:12:55.474 ' 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:55.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.474 --rc genhtml_branch_coverage=1 00:12:55.474 --rc genhtml_function_coverage=1 00:12:55.474 --rc genhtml_legend=1 00:12:55.474 --rc geninfo_all_blocks=1 00:12:55.474 --rc geninfo_unexecuted_blocks=1 00:12:55.474 00:12:55.474 ' 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.474 08:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.474 08:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.474 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.735 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:55.735 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:55.735 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:55.735 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:03.993 08:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.993 08:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:03.993 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:03.993 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:03.993 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:03.993 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.993 08:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.993 08:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:03.993 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:03.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:13:03.994 00:13:03.994 --- 10.0.0.2 ping statistics --- 00:13:03.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.994 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:13:03.994 00:13:03.994 --- 10.0.0.1 ping statistics --- 00:13:03.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.994 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:03.994 08:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=615524 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 615524 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 615524 ']' 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.994 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.994 [2024-11-20 08:58:28.537411] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:13:03.994 [2024-11-20 08:58:28.537480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.994 [2024-11-20 08:58:28.638820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.994 [2024-11-20 08:58:28.691251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.994 [2024-11-20 08:58:28.691304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.994 [2024-11-20 08:58:28.691312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.994 [2024-11-20 08:58:28.691320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.994 [2024-11-20 08:58:28.691326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:03.994 [2024-11-20 08:58:28.693719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.994 [2024-11-20 08:58:28.693880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.994 [2024-11-20 08:58:28.694041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.994 [2024-11-20 08:58:28.694042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.994 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.994 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:03.994 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.994 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:03.994 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.994 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.994 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:03.994 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27986 00:13:04.255 [2024-11-20 08:58:29.570702] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:04.255 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:04.255 { 00:13:04.255 "nqn": "nqn.2016-06.io.spdk:cnode27986", 00:13:04.255 "tgt_name": "foobar", 00:13:04.255 "method": "nvmf_create_subsystem", 00:13:04.255 "req_id": 1 00:13:04.255 } 00:13:04.255 Got JSON-RPC error 
response 00:13:04.255 response: 00:13:04.255 { 00:13:04.255 "code": -32603, 00:13:04.255 "message": "Unable to find target foobar" 00:13:04.255 }' 00:13:04.255 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:04.255 { 00:13:04.255 "nqn": "nqn.2016-06.io.spdk:cnode27986", 00:13:04.255 "tgt_name": "foobar", 00:13:04.255 "method": "nvmf_create_subsystem", 00:13:04.255 "req_id": 1 00:13:04.255 } 00:13:04.255 Got JSON-RPC error response 00:13:04.255 response: 00:13:04.255 { 00:13:04.255 "code": -32603, 00:13:04.255 "message": "Unable to find target foobar" 00:13:04.255 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:04.255 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:04.255 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25115 00:13:04.255 [2024-11-20 08:58:29.779608] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25115: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:04.516 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:04.516 { 00:13:04.516 "nqn": "nqn.2016-06.io.spdk:cnode25115", 00:13:04.516 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:04.516 "method": "nvmf_create_subsystem", 00:13:04.516 "req_id": 1 00:13:04.516 } 00:13:04.516 Got JSON-RPC error response 00:13:04.516 response: 00:13:04.516 { 00:13:04.516 "code": -32602, 00:13:04.516 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:04.516 }' 00:13:04.516 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:04.516 { 00:13:04.516 "nqn": "nqn.2016-06.io.spdk:cnode25115", 00:13:04.516 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:04.516 "method": "nvmf_create_subsystem", 
00:13:04.516 "req_id": 1 00:13:04.516 } 00:13:04.516 Got JSON-RPC error response 00:13:04.516 response: 00:13:04.516 { 00:13:04.516 "code": -32602, 00:13:04.516 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:04.516 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:04.516 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:04.516 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11135 00:13:04.516 [2024-11-20 08:58:29.988346] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11135: invalid model number 'SPDK_Controller' 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:04.516 { 00:13:04.516 "nqn": "nqn.2016-06.io.spdk:cnode11135", 00:13:04.516 "model_number": "SPDK_Controller\u001f", 00:13:04.516 "method": "nvmf_create_subsystem", 00:13:04.516 "req_id": 1 00:13:04.516 } 00:13:04.516 Got JSON-RPC error response 00:13:04.516 response: 00:13:04.516 { 00:13:04.516 "code": -32602, 00:13:04.516 "message": "Invalid MN SPDK_Controller\u001f" 00:13:04.516 }' 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:04.516 { 00:13:04.516 "nqn": "nqn.2016-06.io.spdk:cnode11135", 00:13:04.516 "model_number": "SPDK_Controller\u001f", 00:13:04.516 "method": "nvmf_create_subsystem", 00:13:04.516 "req_id": 1 00:13:04.516 } 00:13:04.516 Got JSON-RPC error response 00:13:04.516 response: 00:13:04.516 { 00:13:04.516 "code": -32602, 00:13:04.516 "message": "Invalid MN SPDK_Controller\u001f" 00:13:04.516 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.516 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.777 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:04.777 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:04.777 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:04.778 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:04.778 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.778 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]] 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'cYc@XVMOYe,wH!!J`A"a"' 00:13:04.778 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'cYc@XVMOYe,wH!!J`A"a"' nqn.2016-06.io.spdk:cnode30836 00:13:05.040 [2024-11-20 08:58:30.369801] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30836: invalid serial number 'cYc@XVMOYe,wH!!J`A"a"' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:05.040 { 00:13:05.040 "nqn": "nqn.2016-06.io.spdk:cnode30836", 00:13:05.040 "serial_number": "cYc@XVMOYe,wH!!J`A\"a\"", 00:13:05.040 "method": "nvmf_create_subsystem", 00:13:05.040 "req_id": 1 00:13:05.040 } 00:13:05.040 Got JSON-RPC error response 00:13:05.040 response: 00:13:05.040 { 00:13:05.040 "code": -32602, 00:13:05.040 "message": "Invalid SN cYc@XVMOYe,wH!!J`A\"a\"" 00:13:05.040 }' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:05.040 { 00:13:05.040 "nqn": "nqn.2016-06.io.spdk:cnode30836", 00:13:05.040 "serial_number": "cYc@XVMOYe,wH!!J`A\"a\"", 00:13:05.040 "method": "nvmf_create_subsystem", 00:13:05.040 "req_id": 1 00:13:05.040 } 00:13:05.040 Got JSON-RPC error response 00:13:05.040 response: 00:13:05.040 { 00:13:05.040 "code": -32602, 00:13:05.040 "message": "Invalid SN cYc@XVMOYe,wH!!J`A\"a\"" 00:13:05.040 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:05.040 
08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:05.040 
08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:05.040 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.040 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:05.040 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.041 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.041 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:05.041 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.302 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:05.302 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:05.302 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:05.303 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:05.303 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
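The `printf %x` / `echo -e` / `string+=` triple traced repeatedly above is the character-append idiom `invalid.sh` uses to assemble its random model-number string. A minimal standalone sketch (the code points 60, 33, 89 here are illustrative, not the randomly chosen ones from this run):

```shell
# Sketch of the invalid.sh string-building loop seen in the trace above:
# convert a decimal code point to hex with printf %x, render it with
# echo -e '\xNN', and append the character to $string (bash syntax).
string=''
for code in 60 33 89; do
  hex=$(printf '%x' "$code")        # e.g. 60 -> 3c
  string+=$(echo -e "\x${hex}")     # e.g. \x3c -> '<'
done
echo "$string"   # prints <!Y
```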
00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '8,bXN?G3aXrOiD2t +A`K`G[$CmUWd5.=' 00:13:05.303 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '8,bXN?G3aXrOiD2t +A`K`G[$CmUWd5.=' nqn.2016-06.io.spdk:cnode31560 00:13:05.564 [2024-11-20 08:58:30.911891] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31560: invalid model number '8,bXN?G3aXrOiD2t +A`K`G[$CmUWd5.=' 00:13:05.564 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:05.564 { 00:13:05.564 "nqn": "nqn.2016-06.io.spdk:cnode31560", 00:13:05.564 "model_number": "8,bXN?G3aXrOiD2t +A`K`G[$CmUWd5.=", 00:13:05.564 "method": "nvmf_create_subsystem", 00:13:05.564 "req_id": 1 00:13:05.564 } 00:13:05.564 Got JSON-RPC error response 00:13:05.564 response: 00:13:05.564 { 00:13:05.564 "code": -32602, 00:13:05.564 "message": "Invalid MN 8,bXN?G3aXrOiD2t +A`K`G[$CmUWd5.=" 00:13:05.564 }' 00:13:05.564 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@59 -- # [[ request: 00:13:05.564 { 00:13:05.564 "nqn": "nqn.2016-06.io.spdk:cnode31560", 00:13:05.564 "model_number": "8,bXN?G3aXrOiD2t +A`K`G[$CmUWd5.=", 00:13:05.564 "method": "nvmf_create_subsystem", 00:13:05.564 "req_id": 1 00:13:05.564 } 00:13:05.564 Got JSON-RPC error response 00:13:05.564 response: 00:13:05.564 { 00:13:05.564 "code": -32602, 00:13:05.564 "message": "Invalid MN 8,bXN?G3aXrOiD2t +A`K`G[$CmUWd5.=" 00:13:05.564 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:05.564 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:05.826 [2024-11-20 08:58:31.112762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.826 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:05.826 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:06.086 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:06.086 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:06.086 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:06.086 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:06.086 [2024-11-20 08:58:31.526370] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:06.086 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:06.086 { 00:13:06.086 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:06.086 "listen_address": { 00:13:06.086 "trtype": "tcp", 
00:13:06.086 "traddr": "", 00:13:06.086 "trsvcid": "4421" 00:13:06.086 }, 00:13:06.086 "method": "nvmf_subsystem_remove_listener", 00:13:06.086 "req_id": 1 00:13:06.086 } 00:13:06.086 Got JSON-RPC error response 00:13:06.086 response: 00:13:06.086 { 00:13:06.086 "code": -32602, 00:13:06.086 "message": "Invalid parameters" 00:13:06.086 }' 00:13:06.086 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:06.086 { 00:13:06.086 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:06.086 "listen_address": { 00:13:06.086 "trtype": "tcp", 00:13:06.086 "traddr": "", 00:13:06.086 "trsvcid": "4421" 00:13:06.086 }, 00:13:06.086 "method": "nvmf_subsystem_remove_listener", 00:13:06.086 "req_id": 1 00:13:06.086 } 00:13:06.086 Got JSON-RPC error response 00:13:06.086 response: 00:13:06.086 { 00:13:06.086 "code": -32602, 00:13:06.086 "message": "Invalid parameters" 00:13:06.086 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:06.086 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16348 -i 0 00:13:06.347 [2024-11-20 08:58:31.714998] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16348: invalid cntlid range [0-65519] 00:13:06.347 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:06.347 { 00:13:06.347 "nqn": "nqn.2016-06.io.spdk:cnode16348", 00:13:06.347 "min_cntlid": 0, 00:13:06.347 "method": "nvmf_create_subsystem", 00:13:06.347 "req_id": 1 00:13:06.347 } 00:13:06.347 Got JSON-RPC error response 00:13:06.347 response: 00:13:06.347 { 00:13:06.347 "code": -32602, 00:13:06.347 "message": "Invalid cntlid range [0-65519]" 00:13:06.347 }' 00:13:06.347 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:06.347 { 00:13:06.347 "nqn": "nqn.2016-06.io.spdk:cnode16348", 
00:13:06.347 "min_cntlid": 0, 00:13:06.347 "method": "nvmf_create_subsystem", 00:13:06.347 "req_id": 1 00:13:06.347 } 00:13:06.347 Got JSON-RPC error response 00:13:06.347 response: 00:13:06.347 { 00:13:06.347 "code": -32602, 00:13:06.347 "message": "Invalid cntlid range [0-65519]" 00:13:06.347 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.347 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19064 -i 65520 00:13:06.607 [2024-11-20 08:58:31.895535] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19064: invalid cntlid range [65520-65519] 00:13:06.607 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:06.607 { 00:13:06.607 "nqn": "nqn.2016-06.io.spdk:cnode19064", 00:13:06.607 "min_cntlid": 65520, 00:13:06.607 "method": "nvmf_create_subsystem", 00:13:06.607 "req_id": 1 00:13:06.607 } 00:13:06.607 Got JSON-RPC error response 00:13:06.607 response: 00:13:06.607 { 00:13:06.607 "code": -32602, 00:13:06.607 "message": "Invalid cntlid range [65520-65519]" 00:13:06.607 }' 00:13:06.607 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:06.607 { 00:13:06.607 "nqn": "nqn.2016-06.io.spdk:cnode19064", 00:13:06.607 "min_cntlid": 65520, 00:13:06.607 "method": "nvmf_create_subsystem", 00:13:06.607 "req_id": 1 00:13:06.607 } 00:13:06.607 Got JSON-RPC error response 00:13:06.607 response: 00:13:06.607 { 00:13:06.607 "code": -32602, 00:13:06.607 "message": "Invalid cntlid range [65520-65519]" 00:13:06.607 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.607 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12586 -I 0 00:13:06.607 [2024-11-20 08:58:32.080098] 
nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12586: invalid cntlid range [1-0] 00:13:06.607 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:06.607 { 00:13:06.607 "nqn": "nqn.2016-06.io.spdk:cnode12586", 00:13:06.607 "max_cntlid": 0, 00:13:06.607 "method": "nvmf_create_subsystem", 00:13:06.607 "req_id": 1 00:13:06.607 } 00:13:06.607 Got JSON-RPC error response 00:13:06.607 response: 00:13:06.607 { 00:13:06.607 "code": -32602, 00:13:06.607 "message": "Invalid cntlid range [1-0]" 00:13:06.607 }' 00:13:06.607 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:06.607 { 00:13:06.607 "nqn": "nqn.2016-06.io.spdk:cnode12586", 00:13:06.607 "max_cntlid": 0, 00:13:06.607 "method": "nvmf_create_subsystem", 00:13:06.607 "req_id": 1 00:13:06.607 } 00:13:06.607 Got JSON-RPC error response 00:13:06.607 response: 00:13:06.607 { 00:13:06.607 "code": -32602, 00:13:06.607 "message": "Invalid cntlid range [1-0]" 00:13:06.607 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.607 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6970 -I 65520 00:13:06.868 [2024-11-20 08:58:32.260697] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6970: invalid cntlid range [1-65520] 00:13:06.868 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:06.868 { 00:13:06.868 "nqn": "nqn.2016-06.io.spdk:cnode6970", 00:13:06.868 "max_cntlid": 65520, 00:13:06.868 "method": "nvmf_create_subsystem", 00:13:06.868 "req_id": 1 00:13:06.868 } 00:13:06.868 Got JSON-RPC error response 00:13:06.868 response: 00:13:06.868 { 00:13:06.868 "code": -32602, 00:13:06.868 "message": "Invalid cntlid range [1-65520]" 00:13:06.868 }' 00:13:06.868 08:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:06.868 { 00:13:06.868 "nqn": "nqn.2016-06.io.spdk:cnode6970", 00:13:06.868 "max_cntlid": 65520, 00:13:06.868 "method": "nvmf_create_subsystem", 00:13:06.868 "req_id": 1 00:13:06.868 } 00:13:06.868 Got JSON-RPC error response 00:13:06.868 response: 00:13:06.868 { 00:13:06.868 "code": -32602, 00:13:06.868 "message": "Invalid cntlid range [1-65520]" 00:13:06.868 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.868 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26490 -i 6 -I 5 00:13:07.128 [2024-11-20 08:58:32.445307] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26490: invalid cntlid range [6-5] 00:13:07.128 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:07.128 { 00:13:07.128 "nqn": "nqn.2016-06.io.spdk:cnode26490", 00:13:07.128 "min_cntlid": 6, 00:13:07.128 "max_cntlid": 5, 00:13:07.128 "method": "nvmf_create_subsystem", 00:13:07.128 "req_id": 1 00:13:07.128 } 00:13:07.128 Got JSON-RPC error response 00:13:07.128 response: 00:13:07.128 { 00:13:07.128 "code": -32602, 00:13:07.128 "message": "Invalid cntlid range [6-5]" 00:13:07.128 }' 00:13:07.128 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:07.128 { 00:13:07.128 "nqn": "nqn.2016-06.io.spdk:cnode26490", 00:13:07.128 "min_cntlid": 6, 00:13:07.128 "max_cntlid": 5, 00:13:07.128 "method": "nvmf_create_subsystem", 00:13:07.128 "req_id": 1 00:13:07.128 } 00:13:07.128 Got JSON-RPC error response 00:13:07.128 response: 00:13:07.128 { 00:13:07.128 "code": -32602, 00:13:07.128 "message": "Invalid cntlid range [6-5]" 00:13:07.128 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:07.128 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:07.129 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:07.129 { 00:13:07.129 "name": "foobar", 00:13:07.129 "method": "nvmf_delete_target", 00:13:07.129 "req_id": 1 00:13:07.129 } 00:13:07.129 Got JSON-RPC error response 00:13:07.129 response: 00:13:07.129 { 00:13:07.129 "code": -32602, 00:13:07.129 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:07.129 }' 00:13:07.129 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:07.129 { 00:13:07.129 "name": "foobar", 00:13:07.129 "method": "nvmf_delete_target", 00:13:07.129 "req_id": 1 00:13:07.129 } 00:13:07.129 Got JSON-RPC error response 00:13:07.129 response: 00:13:07.129 { 00:13:07.129 "code": -32602, 00:13:07.129 "message": "The specified target doesn't exist, cannot delete it." 
00:13:07.129 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:07.129 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:07.129 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:07.129 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:07.129 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:07.129 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:07.129 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:07.129 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:07.129 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:07.129 rmmod nvme_tcp 00:13:07.129 rmmod nvme_fabrics 00:13:07.129 rmmod nvme_keyring 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 615524 ']' 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 615524 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 615524 ']' 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 615524 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 615524 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 615524' 00:13:07.389 killing process with pid 615524 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 615524 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 615524 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:07.389 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.389 08:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.390 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.933 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:09.933 00:13:09.933 real 0m14.157s 00:13:09.933 user 0m21.198s 00:13:09.933 sys 0m6.715s 00:13:09.933 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.933 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.933 ************************************ 00:13:09.933 END TEST nvmf_invalid 00:13:09.933 ************************************ 00:13:09.933 08:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:09.933 08:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:09.933 08:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.933 08:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.933 ************************************ 00:13:09.933 START TEST nvmf_connect_stress 00:13:09.933 ************************************ 00:13:09.933 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:09.933 * Looking for test storage... 
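Each negative test in the `nvmf_invalid` run above follows the same shape: capture the JSON-RPC error body into `$out`, then assert with a `[[ $out == *pattern* ]]` glob match (the escaped `*\I\n\v\a\l\i\d...*` patterns in the trace). A minimal sketch of that check, with a hypothetical sample error body standing in for the rpc.py output:

```shell
# Sketch of the glob-match assertion pattern used by invalid.sh above.
# $out is a hypothetical sample; in the real script it is the captured
# stderr/stdout of an rpc.py call that is expected to fail.
out='{"code": -32602, "message": "Invalid cntlid range [6-5]"}'
if [[ "$out" == *'Invalid cntlid range'* ]]; then
  result=match
else
  result=no-match
fi
echo "$result"
```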
00:13:09.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.933 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:09.933 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:09.933 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:09.933 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:09.933 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.933 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.933 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:09.934 08:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.934 08:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:09.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.934 --rc genhtml_branch_coverage=1 00:13:09.934 --rc genhtml_function_coverage=1 00:13:09.934 --rc genhtml_legend=1 00:13:09.934 --rc geninfo_all_blocks=1 00:13:09.934 --rc geninfo_unexecuted_blocks=1 00:13:09.934 00:13:09.934 ' 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:09.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.934 --rc genhtml_branch_coverage=1 00:13:09.934 --rc genhtml_function_coverage=1 00:13:09.934 --rc genhtml_legend=1 00:13:09.934 --rc geninfo_all_blocks=1 00:13:09.934 --rc geninfo_unexecuted_blocks=1 00:13:09.934 00:13:09.934 ' 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:09.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.934 --rc genhtml_branch_coverage=1 00:13:09.934 --rc genhtml_function_coverage=1 00:13:09.934 --rc genhtml_legend=1 00:13:09.934 --rc geninfo_all_blocks=1 00:13:09.934 --rc geninfo_unexecuted_blocks=1 00:13:09.934 00:13:09.934 ' 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:09.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.934 --rc genhtml_branch_coverage=1 00:13:09.934 --rc genhtml_function_coverage=1 00:13:09.934 --rc genhtml_legend=1 00:13:09.934 --rc geninfo_all_blocks=1 00:13:09.934 --rc geninfo_unexecuted_blocks=1 00:13:09.934 00:13:09.934 ' 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
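The `[: : integer expression expected` message logged above comes from `'[' '' -eq 1 ']'`: test(1)'s `-eq` requires integer operands, and the variable being tested expanded to an empty string. A minimal reproduction, with the usual default-value guard:

```shell
# Reproduce the "[: : integer expression expected" error seen in the trace:
# -eq needs integers on both sides, and an unset/empty variable is not one.
v=
if [ "$v" -eq 1 ] 2>/dev/null; then echo "is 1"; else echo "not 1 (or error)"; fi

# A common guard: default the value to 0 before the numeric test.
if [ "${v:-0}" -eq 1 ]; then echo "is 1"; else echo "not 1"; fi
```

The first test exits with status 2 (error) rather than false, which is why scripts usually either default the variable or use `[[ $v == 1 ]]` string comparison instead.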
00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.934 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.935 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.935 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:09.935 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:09.935 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:09.935 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.077 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.078 08:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:18.078 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.078 08:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:18.078 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.078 08:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:18.078 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:18.078 Found net devices under 0000:4b:00.1: cvl_0_1 
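The device scan above expands `/sys/bus/pci/devices/$pci/net/*` to map each PCI function to its kernel interface name (cvl_0_0, cvl_0_1). The same lookup as a small function; the sysfs base directory is parameterized here only so the sketch can be exercised against a fake tree:

```shell
# List network interface names backed by a PCI function, mirroring the
# pci_net_devs glob in the trace. $2 (sysfs base) defaults to real sysfs.
pci_net_devs() {
    local pci=$1 base=${2:-/sys/bus/pci/devices} d
    for d in "$base/$pci"/net/*; do
        if [ -e "$d" ]; then echo "${d##*/}"; fi
    done
    return 0
}

pci_net_devs 0000:4b:00.0   # on the CI box in the trace this would print cvl_0_0
```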
00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.078 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:18.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:18.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:13:18.079 00:13:18.079 --- 10.0.0.2 ping statistics --- 00:13:18.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.079 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:18.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:13:18.079 00:13:18.079 --- 10.0.0.1 ping statistics --- 00:13:18.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.079 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:18.079 08:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=620711 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 620711 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 620711 ']' 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.079 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.079 [2024-11-20 08:58:42.750815] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:13:18.079 [2024-11-20 08:58:42.750881] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.079 [2024-11-20 08:58:42.851428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:18.079 [2024-11-20 08:58:42.902909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.079 [2024-11-20 08:58:42.902964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.079 [2024-11-20 08:58:42.902973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.079 [2024-11-20 08:58:42.902981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.079 [2024-11-20 08:58:42.902987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
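nvmf_tgt was launched above with `-m 0xE`; the mask selects CPU cores by bit position, so 0xE (binary 1110) pins reactors to cores 1, 2 and 3, matching the three "Reactor started" notices in the log. A quick decode:

```shell
# Decode an SPDK-style core mask: bit i set means core i is used.
mask=0xE
for (( i = 0; i < 8; i++ )); do
    if (( (mask >> i) & 1 )); then echo "core $i"; fi
done
```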
00:13:18.079 [2024-11-20 08:58:42.904823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.079 [2024-11-20 08:58:42.904986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.079 [2024-11-20 08:58:42.904986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.079 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.079 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:18.079 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:18.079 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:18.079 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.342 [2024-11-20 08:58:43.635960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.342 [2024-11-20 08:58:43.657715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.342 NULL1 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=621031 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.342 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.604 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.604 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:18.604 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.604 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.604 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.176 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.176 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:19.176 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.176 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.176 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.437 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.437 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:19.438 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.438 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.438 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.699 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.699 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:19.699 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.699 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.699 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.960 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.960 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:19.960 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.960 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.960 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.221 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.221 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:20.221 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.221 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.221 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.793 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.793 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:20.793 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.793 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.793 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.053 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.054 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:21.054 08:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.054 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.054 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.314 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.314 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:21.314 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.314 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.314 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.574 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.574 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:21.574 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.574 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.574 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.835 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.835 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:21.835 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.835 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.835 08:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.407 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.407 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:22.407 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.407 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.407 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:22.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.928 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.928 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:22.928 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.928 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.928 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.188 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.188 08:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:23.188 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.188 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.188 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.449 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.449 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:23.449 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.449 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.449 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.021 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.021 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:24.021 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.021 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.021 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.281 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.281 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:24.281 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.281 08:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.281 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.543 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.543 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:24.543 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.543 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.543 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.804 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.804 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:24.804 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.804 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.804 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.065 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.065 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:25.065 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.065 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.065 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.326 08:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.326 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:25.326 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.326 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.326 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.896 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.896 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:25.896 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.896 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.896 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.157 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.157 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:26.157 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.157 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.157 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.418 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.418 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:26.418 
08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.418 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.418 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.679 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.679 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:26.679 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.679 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.679 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.940 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.940 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:26.940 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.940 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.940 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.511 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.511 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:27.511 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.511 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.511 
08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.771 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.771 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:27.771 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.771 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.771 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.029 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.029 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:28.029 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.029 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.029 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.289 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.289 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:28.289 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.289 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.289 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.550 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:28.550 08:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.550 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 621031 00:13:28.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (621031) - No such process 00:13:28.550 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 621031 00:13:28.550 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:28.550 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:28.550 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:28.550 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:28.550 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:28.550 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:28.550 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:28.550 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:28.550 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:28.810 rmmod nvme_tcp 00:13:28.810 rmmod nvme_fabrics 00:13:28.810 rmmod nvme_keyring 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 620711 ']' 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 620711 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 620711 ']' 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 620711 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 620711 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 620711' 00:13:28.810 killing process with pid 620711 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 620711 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 620711 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.810 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:31.353 00:13:31.353 real 0m21.395s 00:13:31.353 user 0m42.730s 00:13:31.353 sys 0m9.361s 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.353 ************************************ 00:13:31.353 END TEST nvmf_connect_stress 00:13:31.353 ************************************ 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:31.353 ************************************ 00:13:31.353 START TEST nvmf_fused_ordering 00:13:31.353 ************************************ 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:31.353 * Looking for test storage... 00:13:31.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.353 --rc genhtml_branch_coverage=1 00:13:31.353 --rc genhtml_function_coverage=1 00:13:31.353 --rc genhtml_legend=1 00:13:31.353 --rc geninfo_all_blocks=1 00:13:31.353 --rc geninfo_unexecuted_blocks=1 00:13:31.353 00:13:31.353 ' 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.353 --rc genhtml_branch_coverage=1 00:13:31.353 --rc genhtml_function_coverage=1 00:13:31.353 --rc genhtml_legend=1 00:13:31.353 --rc geninfo_all_blocks=1 00:13:31.353 --rc geninfo_unexecuted_blocks=1 00:13:31.353 00:13:31.353 ' 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.353 --rc genhtml_branch_coverage=1 00:13:31.353 --rc genhtml_function_coverage=1 00:13:31.353 --rc genhtml_legend=1 00:13:31.353 --rc geninfo_all_blocks=1 00:13:31.353 --rc geninfo_unexecuted_blocks=1 00:13:31.353 00:13:31.353 ' 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.353 --rc genhtml_branch_coverage=1 
00:13:31.353 --rc genhtml_function_coverage=1 00:13:31.353 --rc genhtml_legend=1 00:13:31.353 --rc geninfo_all_blocks=1 00:13:31.353 --rc geninfo_unexecuted_blocks=1 00:13:31.353 00:13:31.353 ' 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:31.353 08:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.353 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:31.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:31.354 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.488 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.489 08:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:39.489 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.489 08:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:39.489 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.489 08:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:39.489 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:39.489 Found net devices under 0000:4b:00.1: cvl_0_1 
00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.489 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.489 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.489 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.489 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:39.489 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.489 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.489 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.489 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:39.489 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:39.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:39.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:13:39.490 00:13:39.490 --- 10.0.0.2 ping statistics --- 00:13:39.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.490 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:39.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:13:39.490 00:13:39.490 --- 10.0.0.1 ping statistics --- 00:13:39.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.490 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:39.490 08:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=627192 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 627192 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 627192 ']' 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.490 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.490 [2024-11-20 08:59:04.281291] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:13:39.490 [2024-11-20 08:59:04.281362] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.490 [2024-11-20 08:59:04.383420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.490 [2024-11-20 08:59:04.435384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.490 [2024-11-20 08:59:04.435433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.490 [2024-11-20 08:59:04.435446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.490 [2024-11-20 08:59:04.435454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.490 [2024-11-20 08:59:04.435460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:39.490 [2024-11-20 08:59:04.436260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.752 [2024-11-20 08:59:05.145876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.752 [2024-11-20 08:59:05.170145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.752 NULL1 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.752 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:39.752 [2024-11-20 08:59:05.241094] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:13:39.752 [2024-11-20 08:59:05.241134] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627439 ] 00:13:40.324 Attached to nqn.2016-06.io.spdk:cnode1 00:13:40.324 Namespace ID: 1 size: 1GB
00:13:40.324 fused_ordering(0) ... 00:13:42.305 fused_ordering(1023) [1024 sequential fused_ordering(N) entries, 00:13:40.324 through 00:13:42.305, elided]
00:13:42.305 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:42.305 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:42.305 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:42.305 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:42.305 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:42.305 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:42.305 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:42.305 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:42.305 rmmod nvme_tcp 00:13:42.305 rmmod nvme_fabrics 00:13:42.566 rmmod nvme_keyring 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 627192 ']' 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 627192 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 627192 ']' 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 627192 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 627192 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 627192' 00:13:42.566 killing process with pid 627192 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 627192 00:13:42.566 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 627192 00:13:42.826 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:42.826 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:13:42.826 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:42.826 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:42.826 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:42.826 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:42.826 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:42.826 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:42.827 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:42.827 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.827 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.827 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.739 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:44.739 00:13:44.739 real 0m13.705s 00:13:44.739 user 0m7.304s 00:13:44.739 sys 0m7.424s 00:13:44.739 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.739 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.739 ************************************ 00:13:44.739 END TEST nvmf_fused_ordering 00:13:44.739 ************************************ 00:13:44.739 08:59:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:44.739 08:59:10 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:44.739 08:59:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.739 08:59:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.001 ************************************ 00:13:45.001 START TEST nvmf_ns_masking 00:13:45.001 ************************************ 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:45.001 * Looking for test storage... 00:13:45.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.001 08:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:45.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.001 --rc genhtml_branch_coverage=1 00:13:45.001 --rc genhtml_function_coverage=1 00:13:45.001 --rc genhtml_legend=1 00:13:45.001 --rc geninfo_all_blocks=1 00:13:45.001 --rc geninfo_unexecuted_blocks=1 00:13:45.001 00:13:45.001 ' 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:45.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.001 --rc genhtml_branch_coverage=1 00:13:45.001 --rc genhtml_function_coverage=1 00:13:45.001 --rc genhtml_legend=1 00:13:45.001 --rc geninfo_all_blocks=1 00:13:45.001 --rc geninfo_unexecuted_blocks=1 00:13:45.001 00:13:45.001 ' 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:45.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.001 --rc genhtml_branch_coverage=1 00:13:45.001 --rc genhtml_function_coverage=1 00:13:45.001 --rc genhtml_legend=1 00:13:45.001 --rc geninfo_all_blocks=1 00:13:45.001 --rc geninfo_unexecuted_blocks=1 00:13:45.001 00:13:45.001 ' 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:45.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.001 --rc genhtml_branch_coverage=1 00:13:45.001 --rc 
genhtml_function_coverage=1 00:13:45.001 --rc genhtml_legend=1 00:13:45.001 --rc geninfo_all_blocks=1 00:13:45.001 --rc geninfo_unexecuted_blocks=1 00:13:45.001 00:13:45.001 ' 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.001 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4dc8293d-80be-4344-a535-9ac724eeed3c 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6ed54e52-ef30-4be3-bf50-e38ceae53104 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:45.002 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:45.263 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f16867d6-e330-489d-b314-0e6d3bdb5570 00:13:45.263 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:45.263 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:45.263 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.263 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:45.264 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:45.264 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:45.264 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.264 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.264 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.264 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:45.264 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:45.264 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:45.264 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:53.406 08:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.406 08:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:53.406 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:53.407 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:53.407 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:13:53.407 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:53.407 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:53.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:13:53.407 00:13:53.407 --- 10.0.0.2 ping statistics --- 00:13:53.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.407 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:53.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:13:53.407 00:13:53.407 --- 10.0.0.1 ping statistics --- 00:13:53.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.407 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:53.407 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=632116 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 632116 
00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 632116 ']' 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.407 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:53.407 [2024-11-20 08:59:18.097982] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:13:53.407 [2024-11-20 08:59:18.098049] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.407 [2024-11-20 08:59:18.171349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.408 [2024-11-20 08:59:18.217096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.408 [2024-11-20 08:59:18.217144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:53.408 [2024-11-20 08:59:18.217150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.408 [2024-11-20 08:59:18.217156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.408 [2024-11-20 08:59:18.217170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.408 [2024-11-20 08:59:18.217829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.408 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.408 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:53.408 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.408 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:53.408 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:53.408 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.408 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:53.408 [2024-11-20 08:59:18.538642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.408 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:53.408 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:53.408 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
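Once nvmf_tgt is up, the test configures it over JSON-RPC: create the TCP transport, two 64 MiB / 512 B malloc bdevs, a subsystem, a namespace, and a listener on 10.0.0.2:4420. A dry-run sketch of that sequence follows; `RPC` echoes instead of executing (drop the leading `echo` to drive a live target), and the rpc.py path is the workspace path seen in the log.

```shell
# Dry-run sketch of the target-side RPC sequence in this test.
# RPC only echoes; remove the "echo" to invoke the real rpc.py
# (workspace path taken from the log) against a running nvmf_tgt.
RPC="echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

configure_target() {
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC bdev_malloc_create 64 512 -b Malloc2
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 1
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
}

configure_target
```

The `-a` on `nvmf_create_subsystem` allows any host to connect; later in the test the namespace is re-added with `--no-auto-visible` so that per-host visibility can be toggled with `nvmf_ns_add_host` / `nvmf_ns_remove_host`.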
00:13:53.408 Malloc1 00:13:53.408 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:53.670 Malloc2 00:13:53.670 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:53.670 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:53.930 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.192 [2024-11-20 08:59:19.480251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.192 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:54.192 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f16867d6-e330-489d-b314-0e6d3bdb5570 -a 10.0.0.2 -s 4420 -i 4 00:13:54.192 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:54.192 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:54.192 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.192 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:54.192 08:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:56.741 [ 0]:0x1 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:56.741 
08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aff4e00be1d145498bf2dc819cd6e79f 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aff4e00be1d145498bf2dc819cd6e79f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:56.741 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:56.741 [ 0]:0x1 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aff4e00be1d145498bf2dc819cd6e79f 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aff4e00be1d145498bf2dc819cd6e79f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:56.741 [ 1]:0x2 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d838fcbb620c42e1926d3e2fbf7d7395 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d838fcbb620c42e1926d3e2fbf7d7395 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:56.741 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:57.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.002 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.263 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:57.524 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:57.524 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f16867d6-e330-489d-b314-0e6d3bdb5570 -a 10.0.0.2 -s 4420 -i 4 00:13:57.524 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:57.524 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:57.524 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.524 08:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:57.524 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:57.524 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
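The visibility probe the test keeps repeating (`ns_is_visible`) boils down to one comparison: a namespace the host is allowed to see reports its real NGUID, while a masked one reports all zeroes. A minimal sketch of that check, fed canned NGUID values from the log instead of live `nvme id-ns` output:

```shell
# Sketch of the NGUID comparison behind ns_is_visible.  In the real
# test the value comes from:
#   nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid
# Here canned NGUIDs from the log stand in for controller output.
ZERO_NGUID=00000000000000000000000000000000

nguid_visible() {
    [[ "$1" != "$ZERO_NGUID" ]]
}

nguid_visible aff4e00be1d145498bf2dc819cd6e79f && echo "ns1: visible"
nguid_visible "$ZERO_NGUID" || echo "ns1: masked"
```

This is why the log's `[[ ... != \0\0...\0 ]]` tests appear after every `nvmf_ns_add_host` / `nvmf_ns_remove_host` call: the NGUID flips between the real value and all zeroes as the host is granted or denied access, and the `NOT` wrapper inverts the expectation for the masked case.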
00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:00.071 [ 0]:0x2 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d838fcbb620c42e1926d3e2fbf7d7395 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d838fcbb620c42e1926d3e2fbf7d7395 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.071 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:00.072 [ 0]:0x1 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aff4e00be1d145498bf2dc819cd6e79f 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aff4e00be1d145498bf2dc819cd6e79f != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:00.072 [ 1]:0x2 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d838fcbb620c42e1926d3e2fbf7d7395 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d838fcbb620c42e1926d3e2fbf7d7395 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.072 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:00.334 [ 0]:0x2 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d838fcbb620c42e1926d3e2fbf7d7395 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d838fcbb620c42e1926d3e2fbf7d7395 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:00.334 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:00.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.595 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:00.595 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:00.595 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f16867d6-e330-489d-b314-0e6d3bdb5570 -a 10.0.0.2 -s 4420 -i 4 00:14:00.855 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:00.855 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:00.855 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.855 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:00.855 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:00.855 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.400 [ 0]:0x1 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.400 08:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aff4e00be1d145498bf2dc819cd6e79f 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aff4e00be1d145498bf2dc819cd6e79f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.400 [ 1]:0x2 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d838fcbb620c42e1926d3e2fbf7d7395 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d838fcbb620c42e1926d3e2fbf7d7395 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:03.400 
08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:03.400 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.401 [ 0]:0x2 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:03.401 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d838fcbb620c42e1926d3e2fbf7d7395 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d838fcbb620c42e1926d3e2fbf7d7395 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.661 08:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:03.661 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:03.661 [2024-11-20 08:59:29.110644] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:03.661 request: 00:14:03.661 { 00:14:03.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.661 "nsid": 2, 00:14:03.661 "host": "nqn.2016-06.io.spdk:host1", 00:14:03.661 "method": "nvmf_ns_remove_host", 00:14:03.661 "req_id": 1 00:14:03.661 } 00:14:03.661 Got JSON-RPC error response 00:14:03.661 response: 00:14:03.661 { 00:14:03.661 "code": -32602, 00:14:03.661 "message": "Invalid parameters" 00:14:03.661 } 00:14:03.661 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:03.661 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:03.661 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:03.661 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:03.661 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:03.662 08:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.662 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.662 [ 0]:0x2 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d838fcbb620c42e1926d3e2fbf7d7395 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d838fcbb620c42e1926d3e2fbf7d7395 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:03.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=634489 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 634489 /var/tmp/host.sock 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 634489 ']' 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:03.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.949 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:03.949 [2024-11-20 08:59:29.347460] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:14:03.949 [2024-11-20 08:59:29.347511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634489 ] 00:14:03.949 [2024-11-20 08:59:29.434217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.949 [2024-11-20 08:59:29.470402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.890 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.890 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:04.890 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.890 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:05.150 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4dc8293d-80be-4344-a535-9ac724eeed3c 00:14:05.150 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:05.150 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4DC8293D80BE4344A5359AC724EEED3C -i 00:14:05.150 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6ed54e52-ef30-4be3-bf50-e38ceae53104 00:14:05.150 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:05.150 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6ED54E52EF304BE3BF50E38CEAE53104 -i 00:14:05.417 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:05.730 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:05.730 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:05.730 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:06.054 nvme0n1 00:14:06.054 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:06.054 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:06.330 nvme1n2 00:14:06.330 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:06.330 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:06.330 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:06.330 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:06.330 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:06.591 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:06.591 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:06.591 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:06.591 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:06.591 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4dc8293d-80be-4344-a535-9ac724eeed3c == \4\d\c\8\2\9\3\d\-\8\0\b\e\-\4\3\4\4\-\a\5\3\5\-\9\a\c\7\2\4\e\e\e\d\3\c ]] 00:14:06.591 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:06.591 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:06.591 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:06.851 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6ed54e52-ef30-4be3-bf50-e38ceae53104 == \6\e\d\5\4\e\5\2\-\e\f\3\0\-\4\b\e\3\-\b\f\5\0\-\e\3\8\c\e\a\e\5\3\1\0\4 ]] 00:14:06.851 08:59:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4dc8293d-80be-4344-a535-9ac724eeed3c 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4DC8293D80BE4344A5359AC724EEED3C 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4DC8293D80BE4344A5359AC724EEED3C 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:07.112 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4DC8293D80BE4344A5359AC724EEED3C 00:14:07.373 [2024-11-20 08:59:32.712105] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:07.373 [2024-11-20 08:59:32.712133] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:07.373 [2024-11-20 08:59:32.712140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.373 request: 00:14:07.373 { 00:14:07.373 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.373 "namespace": { 00:14:07.373 "bdev_name": "invalid", 00:14:07.373 "nsid": 1, 00:14:07.373 "nguid": "4DC8293D80BE4344A5359AC724EEED3C", 00:14:07.373 "no_auto_visible": false 00:14:07.373 }, 00:14:07.373 "method": "nvmf_subsystem_add_ns", 00:14:07.373 "req_id": 1 00:14:07.373 } 00:14:07.373 Got JSON-RPC error response 00:14:07.373 response: 00:14:07.373 { 00:14:07.373 "code": -32602, 00:14:07.373 "message": "Invalid parameters" 00:14:07.373 } 00:14:07.373 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:07.373 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:07.373 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:07.373 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:07.373 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4dc8293d-80be-4344-a535-9ac724eeed3c 00:14:07.373 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:07.373 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4DC8293D80BE4344A5359AC724EEED3C -i 00:14:07.634 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:09.549 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:09.549 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:09.549 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 634489 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 634489 ']' 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 634489 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634489 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634489' 00:14:09.810 killing process with pid 634489 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 634489 00:14:09.810 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 634489 00:14:10.071 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:10.071 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:10.071 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:10.071 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:10.071 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:10.071 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:10.071 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:10.071 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:10.071 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:10.071 rmmod nvme_tcp 00:14:10.071 rmmod 
nvme_fabrics 00:14:10.071 rmmod nvme_keyring 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 632116 ']' 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 632116 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 632116 ']' 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 632116 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 632116 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 632116' 00:14:10.332 killing process with pid 632116 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 632116 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 632116 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:10.332 08:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.332 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.884 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:12.884 00:14:12.884 real 0m27.620s 00:14:12.884 user 0m30.960s 00:14:12.884 sys 0m8.234s 00:14:12.884 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.884 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.884 ************************************ 00:14:12.884 END TEST nvmf_ns_masking 00:14:12.884 ************************************ 00:14:12.884 08:59:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:12.884 08:59:37 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:12.884 08:59:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:12.884 08:59:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.884 08:59:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:12.884 ************************************ 00:14:12.884 START TEST nvmf_nvme_cli 00:14:12.884 ************************************ 00:14:12.884 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:12.884 * Looking for test storage... 00:14:12.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.884 08:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.884 --rc genhtml_branch_coverage=1 00:14:12.884 --rc genhtml_function_coverage=1 00:14:12.884 --rc genhtml_legend=1 00:14:12.884 --rc geninfo_all_blocks=1 00:14:12.884 --rc geninfo_unexecuted_blocks=1 00:14:12.884 
00:14:12.884 ' 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.884 --rc genhtml_branch_coverage=1 00:14:12.884 --rc genhtml_function_coverage=1 00:14:12.884 --rc genhtml_legend=1 00:14:12.884 --rc geninfo_all_blocks=1 00:14:12.884 --rc geninfo_unexecuted_blocks=1 00:14:12.884 00:14:12.884 ' 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.884 --rc genhtml_branch_coverage=1 00:14:12.884 --rc genhtml_function_coverage=1 00:14:12.884 --rc genhtml_legend=1 00:14:12.884 --rc geninfo_all_blocks=1 00:14:12.884 --rc geninfo_unexecuted_blocks=1 00:14:12.884 00:14:12.884 ' 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.884 --rc genhtml_branch_coverage=1 00:14:12.884 --rc genhtml_function_coverage=1 00:14:12.884 --rc genhtml_legend=1 00:14:12.884 --rc geninfo_all_blocks=1 00:14:12.884 --rc geninfo_unexecuted_blocks=1 00:14:12.884 00:14:12.884 ' 00:14:12.884 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.885 08:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:12.885 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:21.032 08:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:21.032 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:21.032 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:21.033 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.033 08:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:21.033 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:21.033 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.033 08:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:21.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:14:21.033 00:14:21.033 --- 10.0.0.2 ping statistics --- 00:14:21.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.033 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:14:21.033 00:14:21.033 --- 10.0.0.1 ping statistics --- 00:14:21.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.033 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:21.033 08:59:45 
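The trace above (nvmf/common.sh@250-291) moves the first physical port into a dedicated network namespace so the target (10.0.0.2, inside `cvl_0_0_ns_spdk`) and the initiator (10.0.0.1, in the root namespace) can talk over real hardware on one host, then opens TCP/4420 and sanity-checks with pings. A minimal dry-run sketch of that sequence, assuming the interface names and addressing seen in the log; it only prints the privileged commands instead of executing them, so no root or NICs are needed:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init netns setup performed in the trace.
# Each privileged command is printed, not executed.
set -euo pipefail

NS=cvl_0_0_ns_spdk            # target namespace name from the log
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 # interface names are from this run; yours will differ
TGT_IP=10.0.0.2 INI_IP=10.0.0.1
PORT=4420

run() { echo "+ $*"; }        # swap the body for: sudo "$@" to actually apply

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"              # target port moves into the ns
run ip addr add "$INI_IP/24" dev "$INI_IF"         # initiator side stays in root ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport "$PORT" -j ACCEPT
run ping -c 1 "$TGT_IP"                            # initiator -> target reachability
```

Because the target interface lives in the namespace, every target-side command in the rest of the log (including launching `nvmf_tgt`) is prefixed with `ip netns exec cvl_0_0_ns_spdk`.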
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=640013 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 640013 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 640013 ']' 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.033 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.033 [2024-11-20 08:59:45.815688] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:14:21.033 [2024-11-20 08:59:45.815784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.033 [2024-11-20 08:59:45.917061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.033 [2024-11-20 08:59:45.970965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.033 [2024-11-20 08:59:45.971016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.033 [2024-11-20 08:59:45.971024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.033 [2024-11-20 08:59:45.971031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.033 [2024-11-20 08:59:45.971038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:21.033 [2024-11-20 08:59:45.973440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.033 [2024-11-20 08:59:45.973602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.033 [2024-11-20 08:59:45.973762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.033 [2024-11-20 08:59:45.973762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.295 [2024-11-20 08:59:46.689458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.295 Malloc0 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.295 Malloc1 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.295 [2024-11-20 08:59:46.805707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.295 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.557 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.557 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:21.557 00:14:21.557 Discovery Log Number of Records 2, Generation counter 2 00:14:21.557 =====Discovery Log Entry 0====== 00:14:21.557 trtype: tcp 00:14:21.557 adrfam: ipv4 00:14:21.557 subtype: current discovery subsystem 00:14:21.557 treq: not required 00:14:21.557 portid: 0 00:14:21.557 trsvcid: 4420 
00:14:21.557 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:21.557 traddr: 10.0.0.2 00:14:21.557 eflags: explicit discovery connections, duplicate discovery information 00:14:21.557 sectype: none 00:14:21.557 =====Discovery Log Entry 1====== 00:14:21.557 trtype: tcp 00:14:21.557 adrfam: ipv4 00:14:21.557 subtype: nvme subsystem 00:14:21.557 treq: not required 00:14:21.557 portid: 0 00:14:21.557 trsvcid: 4420 00:14:21.557 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:21.557 traddr: 10.0.0.2 00:14:21.557 eflags: none 00:14:21.557 sectype: none 00:14:21.557 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:21.557 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:21.557 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:21.557 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:21.557 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:21.557 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:21.557 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:21.557 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:21.557 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:21.557 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:21.557 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.472 08:59:48 
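Between app start and the `nvme connect` above, the test provisions the target entirely over JSON-RPC (target/nvme_cli.sh@19-28): one TCP transport, two 64 MiB malloc bdevs, subsystem nqn.2016-06.io.spdk:cnode1 carrying both namespaces, and data plus discovery listeners on 10.0.0.2:4420. A sketch of the same sequence as direct `scripts/rpc.py` calls — the trace goes through the `rpc_cmd` wrapper and an `ip netns exec` prefix, both omitted here — printed rather than executed so it runs without a live target:

```shell
#!/usr/bin/env bash
# Sketch of the target provisioning from the trace, expressed as the
# underlying scripts/rpc.py invocations. Commands are printed, not executed.
NQN=nqn.2016-06.io.spdk:cnode1
SERIAL=SPDKISFASTANDAWESOME

rpc=(
  "nvmf_create_transport -t tcp -o -u 8192"
  "bdev_malloc_create 64 512 -b Malloc0"
  "bdev_malloc_create 64 512 -b Malloc1"
  "nvmf_create_subsystem $NQN -a -s $SERIAL -d SPDK_Controller1 -i 291"
  "nvmf_subsystem_add_ns $NQN Malloc0"
  "nvmf_subsystem_add_ns $NQN Malloc1"
  "nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
  "nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420"
)

for cmd in "${rpc[@]}"; do
  echo "scripts/rpc.py $cmd"
done
```

The discovery log printed above is the direct result of the last two entries: entry 0 is the discovery subsystem itself, entry 1 is cnode1 with its two malloc-backed namespaces.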
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:23.472 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:23.472 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.472 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:23.472 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:23.472 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:25.386 
08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:25.386 /dev/nvme0n2 ]] 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
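The `get_nvme_devs` trace above reads `nvme list` output line by line and keeps only the `/dev/nvme*` node names, skipping the header and separator rows. A self-contained bash sketch of that filter, with a hypothetical `fake_nvme_list` stub in place of the real `nvme list` call:

```shell
# Sketch of get_nvme_devs: keep only lines whose first field is a
# /dev/nvme* node; the header ("Node ...") and dashed separator are
# rejected by the glob test. fake_nvme_list stands in for `nvme list`.
get_nvme_devs() {
  local dev _
  while read -r dev _; do
    [[ $dev == /dev/nvme* ]] && echo "$dev"
  done < <(fake_nvme_list)
}

fake_nvme_list() {
  cat <<'EOF'
Node             SN                   Model
---------------- -------------------- ----------
/dev/nvme0n1     SPDKISFASTANDAWESOME SPDK bdev
/dev/nvme0n2     SPDKISFASTANDAWESOME SPDK bdev
EOF
}

devs=($(get_nvme_devs))
echo "found ${#devs[@]} devices: ${devs[*]}"
```

The caller in the trace does exactly this capture-into-an-array step (`devs=($(get_nvme_devs))`) and then uses the element count as `nvme_num`.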
return 0 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.386 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:25.387 rmmod nvme_tcp 00:14:25.387 rmmod nvme_fabrics 00:14:25.387 rmmod nvme_keyring 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 640013 ']' 
00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 640013 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 640013 ']' 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 640013 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.387 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 640013 00:14:25.648 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.648 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.648 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 640013' 00:14:25.648 killing process with pid 640013 00:14:25.648 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 640013 00:14:25.648 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 640013 00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
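The `killprocess` trace above checks the PID is alive with `kill -0`, inspects the process name before killing (refusing to kill `sudo`), then kills and waits. A simplified, runnable sketch of the liveness-check/kill/wait core of that pattern (the name check is omitted here):

```shell
# Sketch of the killprocess core: verify the PID responds to signal 0,
# send SIGTERM, then reap it so no zombie is left behind.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # not running
  kill "$pid"
  wait "$pid" 2>/dev/null                  # reap; exit status ignored
  return 0
}

sleep 30 &
pid=$!
killprocess "$pid" && echo "killed $pid"
kill -0 "$pid" 2>/dev/null || echo "gone"
```

The traced helper additionally compares `ps --no-headers -o comm= $pid` against `sudo` as a safety check before signalling, which this sketch leaves out.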
00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.648 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:28.192 00:14:28.192 real 0m15.182s 00:14:28.192 user 0m22.645s 00:14:28.192 sys 0m6.396s 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.192 ************************************ 00:14:28.192 END TEST nvmf_nvme_cli 00:14:28.192 ************************************ 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:28.192 ************************************ 00:14:28.192 START TEST 
nvmf_vfio_user 00:14:28.192 ************************************ 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:28.192 * Looking for test storage... 00:14:28.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.192 08:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:28.192 08:59:53 
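The `cmp_versions` trace above (invoked as `lt 1.15 2` to pick lcov options) splits both version strings on `.` and compares component-wise. A simplified bash sketch of the same idea, not the SPDK implementation (which also handles `-`-separated parts and `ge`/`gt`/`le` operators):

```shell
# Simplified version comparison: split on dots, compare numerically
# component by component, treat missing components as 0.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.1 2.1 || echo "2.1 == 2.1"
```

Note the component-wise rule is what makes `1.15 < 2` true even though `15 > 2` as a bare number: the first components `1 < 2` decide the result before the second components are consulted.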
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:28.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.192 --rc genhtml_branch_coverage=1 00:14:28.192 --rc genhtml_function_coverage=1 00:14:28.192 --rc genhtml_legend=1 00:14:28.192 --rc geninfo_all_blocks=1 00:14:28.192 --rc geninfo_unexecuted_blocks=1 00:14:28.192 00:14:28.192 ' 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:28.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.192 --rc genhtml_branch_coverage=1 00:14:28.192 --rc genhtml_function_coverage=1 00:14:28.192 --rc genhtml_legend=1 00:14:28.192 --rc geninfo_all_blocks=1 00:14:28.192 --rc geninfo_unexecuted_blocks=1 00:14:28.192 00:14:28.192 ' 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:28.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.192 --rc genhtml_branch_coverage=1 00:14:28.192 --rc genhtml_function_coverage=1 00:14:28.192 --rc genhtml_legend=1 00:14:28.192 --rc geninfo_all_blocks=1 00:14:28.192 --rc geninfo_unexecuted_blocks=1 00:14:28.192 00:14:28.192 ' 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:28.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.192 --rc genhtml_branch_coverage=1 00:14:28.192 --rc genhtml_function_coverage=1 00:14:28.192 --rc genhtml_legend=1 00:14:28.192 --rc geninfo_all_blocks=1 00:14:28.192 --rc geninfo_unexecuted_blocks=1 00:14:28.192 00:14:28.192 ' 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.192 
08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.192 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:28.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:28.193 08:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=641524 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 641524' 00:14:28.193 Process pid: 641524 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 641524 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
641524 ']' 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.193 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:28.193 [2024-11-20 08:59:53.529304] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:14:28.193 [2024-11-20 08:59:53.529378] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.193 [2024-11-20 08:59:53.618368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.193 [2024-11-20 08:59:53.652902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.193 [2024-11-20 08:59:53.652936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.193 [2024-11-20 08:59:53.652942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.193 [2024-11-20 08:59:53.652947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.193 [2024-11-20 08:59:53.652951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:28.193 [2024-11-20 08:59:53.654479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.193 [2024-11-20 08:59:53.654631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.193 [2024-11-20 08:59:53.654781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.193 [2024-11-20 08:59:53.654783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.136 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.136 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:29.136 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:30.078 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:30.078 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:30.078 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:30.078 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:30.078 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:30.078 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:30.339 Malloc1 00:14:30.339 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:30.600 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:30.600 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:30.860 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:30.860 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:30.860 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:31.121 Malloc2 00:14:31.121 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:31.382 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:31.382 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:31.646 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:31.646 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:31.646 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:31.646 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:31.646 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:31.646 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:31.646 [2024-11-20 08:59:57.064783] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:14:31.646 [2024-11-20 08:59:57.064823] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642262 ] 00:14:31.646 [2024-11-20 08:59:57.105456] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:31.646 [2024-11-20 08:59:57.114457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:31.646 [2024-11-20 08:59:57.114475] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5978d71000 00:14:31.646 [2024-11-20 08:59:57.115462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.646 [2024-11-20 08:59:57.116467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.646 [2024-11-20 08:59:57.117473] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.646 [2024-11-20 08:59:57.118474] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:31.646 [2024-11-20 08:59:57.119472] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:31.646 [2024-11-20 08:59:57.120482] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.646 [2024-11-20 08:59:57.121485] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:31.646 [2024-11-20 08:59:57.122489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.646 [2024-11-20 08:59:57.123497] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:31.646 [2024-11-20 08:59:57.123504] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5978d66000 00:14:31.646 [2024-11-20 08:59:57.124417] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:31.646 [2024-11-20 08:59:57.133879] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:31.646 [2024-11-20 08:59:57.133900] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:31.646 [2024-11-20 08:59:57.138592] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:31.646 [2024-11-20 08:59:57.138627] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:31.646 [2024-11-20 08:59:57.138684] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:31.646 [2024-11-20 08:59:57.138696] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:31.646 [2024-11-20 08:59:57.138700] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:31.646 [2024-11-20 08:59:57.139595] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:31.646 [2024-11-20 08:59:57.139602] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:31.646 [2024-11-20 08:59:57.139608] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:31.646 [2024-11-20 08:59:57.140596] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:31.646 [2024-11-20 08:59:57.140604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:31.646 [2024-11-20 08:59:57.140609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:31.646 [2024-11-20 08:59:57.141597] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:31.646 [2024-11-20 08:59:57.141603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:31.646 [2024-11-20 08:59:57.142607] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:31.646 [2024-11-20 08:59:57.142614] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:31.646 [2024-11-20 08:59:57.142618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:31.646 [2024-11-20 08:59:57.142623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:31.646 [2024-11-20 08:59:57.142729] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:31.646 [2024-11-20 08:59:57.142732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:31.646 [2024-11-20 08:59:57.142738] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:31.646 [2024-11-20 08:59:57.143619] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:31.646 [2024-11-20 08:59:57.144621] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:31.646 [2024-11-20 08:59:57.145630] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:31.646 [2024-11-20 08:59:57.146634] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:31.646 [2024-11-20 08:59:57.146685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:31.646 [2024-11-20 08:59:57.147645] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:31.646 [2024-11-20 08:59:57.147651] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:31.647 [2024-11-20 08:59:57.147654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147669] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:31.647 [2024-11-20 08:59:57.147677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147688] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:31.647 [2024-11-20 08:59:57.147691] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.647 [2024-11-20 08:59:57.147694] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.647 [2024-11-20 08:59:57.147705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.647 [2024-11-20 08:59:57.147734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:14:31.647 [2024-11-20 08:59:57.147741] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:31.647 [2024-11-20 08:59:57.147744] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:31.647 [2024-11-20 08:59:57.147748] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:31.647 [2024-11-20 08:59:57.147751] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:31.647 [2024-11-20 08:59:57.147756] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:31.647 [2024-11-20 08:59:57.147759] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:31.647 [2024-11-20 08:59:57.147762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:31.647 [2024-11-20 08:59:57.147790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:31.647 [2024-11-20 08:59:57.147798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.647 [2024-11-20 08:59:57.147804] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.647 [2024-11-20 08:59:57.147810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.647 [2024-11-20 08:59:57.147816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.647 [2024-11-20 08:59:57.147820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:31.647 [2024-11-20 08:59:57.147839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:31.647 [2024-11-20 08:59:57.147844] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:31.647 [2024-11-20 08:59:57.147848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:31.647 [2024-11-20 08:59:57.147873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:31.647 [2024-11-20 08:59:57.147917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147928] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:31.647 [2024-11-20 08:59:57.147931] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:31.647 [2024-11-20 08:59:57.147934] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.647 [2024-11-20 08:59:57.147938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:31.647 [2024-11-20 08:59:57.147949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:31.647 [2024-11-20 08:59:57.147956] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:31.647 [2024-11-20 08:59:57.147962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.147973] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:31.647 [2024-11-20 08:59:57.147977] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.647 [2024-11-20 08:59:57.147979] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.647 [2024-11-20 08:59:57.147983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.647 [2024-11-20 08:59:57.148002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:31.647 [2024-11-20 08:59:57.148010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.148016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.148021] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:31.647 [2024-11-20 08:59:57.148024] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.647 [2024-11-20 08:59:57.148026] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.647 [2024-11-20 08:59:57.148031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.647 [2024-11-20 08:59:57.148039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:14:31.647 [2024-11-20 08:59:57.148044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.148049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.148055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.148059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.148062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.148066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.148070] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:31.647 [2024-11-20 08:59:57.148073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:31.647 [2024-11-20 08:59:57.148077] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:31.647 [2024-11-20 08:59:57.148090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:31.647 [2024-11-20 08:59:57.148101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:31.647 [2024-11-20 08:59:57.148109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:31.647 [2024-11-20 08:59:57.148119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:31.648 [2024-11-20 08:59:57.148129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:31.648 [2024-11-20 08:59:57.148138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:31.648 [2024-11-20 08:59:57.148146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:31.648 [2024-11-20 08:59:57.148154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:31.648 [2024-11-20 08:59:57.148168] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:31.648 [2024-11-20 08:59:57.148172] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:31.648 [2024-11-20 08:59:57.148175] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:31.648 [2024-11-20 08:59:57.148177] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:31.648 [2024-11-20 08:59:57.148179] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:31.648 [2024-11-20 08:59:57.148184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:31.648 [2024-11-20 08:59:57.148189] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:31.648 [2024-11-20 08:59:57.148192] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:31.648 [2024-11-20 08:59:57.148195] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.648 [2024-11-20 08:59:57.148199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:31.648 [2024-11-20 08:59:57.148204] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:31.648 [2024-11-20 08:59:57.148207] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.648 [2024-11-20 08:59:57.148210] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.648 [2024-11-20 08:59:57.148214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.648 [2024-11-20 08:59:57.148219] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:31.648 [2024-11-20 08:59:57.148222] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:31.648 [2024-11-20 08:59:57.148225] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.648 [2024-11-20 08:59:57.148229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:31.648 [2024-11-20 08:59:57.148234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:31.648 [2024-11-20 
08:59:57.148242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:31.648 [2024-11-20 08:59:57.148249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:31.648 [2024-11-20 08:59:57.148254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:31.648 ===================================================== 00:14:31.648 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:31.648 ===================================================== 00:14:31.648 Controller Capabilities/Features 00:14:31.648 ================================ 00:14:31.648 Vendor ID: 4e58 00:14:31.648 Subsystem Vendor ID: 4e58 00:14:31.648 Serial Number: SPDK1 00:14:31.648 Model Number: SPDK bdev Controller 00:14:31.648 Firmware Version: 25.01 00:14:31.648 Recommended Arb Burst: 6 00:14:31.648 IEEE OUI Identifier: 8d 6b 50 00:14:31.648 Multi-path I/O 00:14:31.648 May have multiple subsystem ports: Yes 00:14:31.648 May have multiple controllers: Yes 00:14:31.648 Associated with SR-IOV VF: No 00:14:31.648 Max Data Transfer Size: 131072 00:14:31.648 Max Number of Namespaces: 32 00:14:31.648 Max Number of I/O Queues: 127 00:14:31.648 NVMe Specification Version (VS): 1.3 00:14:31.648 NVMe Specification Version (Identify): 1.3 00:14:31.648 Maximum Queue Entries: 256 00:14:31.648 Contiguous Queues Required: Yes 00:14:31.648 Arbitration Mechanisms Supported 00:14:31.648 Weighted Round Robin: Not Supported 00:14:31.648 Vendor Specific: Not Supported 00:14:31.648 Reset Timeout: 15000 ms 00:14:31.648 Doorbell Stride: 4 bytes 00:14:31.648 NVM Subsystem Reset: Not Supported 00:14:31.648 Command Sets Supported 00:14:31.648 NVM Command Set: Supported 00:14:31.648 Boot Partition: Not Supported 00:14:31.648 Memory Page Size Minimum: 4096 bytes 00:14:31.648 
Memory Page Size Maximum: 4096 bytes 00:14:31.648 Persistent Memory Region: Not Supported 00:14:31.648 Optional Asynchronous Events Supported 00:14:31.648 Namespace Attribute Notices: Supported 00:14:31.648 Firmware Activation Notices: Not Supported 00:14:31.648 ANA Change Notices: Not Supported 00:14:31.648 PLE Aggregate Log Change Notices: Not Supported 00:14:31.648 LBA Status Info Alert Notices: Not Supported 00:14:31.648 EGE Aggregate Log Change Notices: Not Supported 00:14:31.648 Normal NVM Subsystem Shutdown event: Not Supported 00:14:31.648 Zone Descriptor Change Notices: Not Supported 00:14:31.648 Discovery Log Change Notices: Not Supported 00:14:31.648 Controller Attributes 00:14:31.648 128-bit Host Identifier: Supported 00:14:31.648 Non-Operational Permissive Mode: Not Supported 00:14:31.648 NVM Sets: Not Supported 00:14:31.648 Read Recovery Levels: Not Supported 00:14:31.648 Endurance Groups: Not Supported 00:14:31.648 Predictable Latency Mode: Not Supported 00:14:31.648 Traffic Based Keep ALive: Not Supported 00:14:31.648 Namespace Granularity: Not Supported 00:14:31.648 SQ Associations: Not Supported 00:14:31.648 UUID List: Not Supported 00:14:31.648 Multi-Domain Subsystem: Not Supported 00:14:31.648 Fixed Capacity Management: Not Supported 00:14:31.648 Variable Capacity Management: Not Supported 00:14:31.648 Delete Endurance Group: Not Supported 00:14:31.648 Delete NVM Set: Not Supported 00:14:31.648 Extended LBA Formats Supported: Not Supported 00:14:31.648 Flexible Data Placement Supported: Not Supported 00:14:31.648 00:14:31.648 Controller Memory Buffer Support 00:14:31.648 ================================ 00:14:31.648 Supported: No 00:14:31.648 00:14:31.648 Persistent Memory Region Support 00:14:31.648 ================================ 00:14:31.648 Supported: No 00:14:31.648 00:14:31.648 Admin Command Set Attributes 00:14:31.648 ============================ 00:14:31.648 Security Send/Receive: Not Supported 00:14:31.648 Format NVM: Not Supported 
00:14:31.648 Firmware Activate/Download: Not Supported 00:14:31.648 Namespace Management: Not Supported 00:14:31.648 Device Self-Test: Not Supported 00:14:31.648 Directives: Not Supported 00:14:31.648 NVMe-MI: Not Supported 00:14:31.648 Virtualization Management: Not Supported 00:14:31.648 Doorbell Buffer Config: Not Supported 00:14:31.648 Get LBA Status Capability: Not Supported 00:14:31.648 Command & Feature Lockdown Capability: Not Supported 00:14:31.648 Abort Command Limit: 4 00:14:31.648 Async Event Request Limit: 4 00:14:31.648 Number of Firmware Slots: N/A 00:14:31.648 Firmware Slot 1 Read-Only: N/A 00:14:31.648 Firmware Activation Without Reset: N/A 00:14:31.648 Multiple Update Detection Support: N/A 00:14:31.648 Firmware Update Granularity: No Information Provided 00:14:31.648 Per-Namespace SMART Log: No 00:14:31.649 Asymmetric Namespace Access Log Page: Not Supported 00:14:31.649 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:31.649 Command Effects Log Page: Supported 00:14:31.649 Get Log Page Extended Data: Supported 00:14:31.649 Telemetry Log Pages: Not Supported 00:14:31.649 Persistent Event Log Pages: Not Supported 00:14:31.649 Supported Log Pages Log Page: May Support 00:14:31.649 Commands Supported & Effects Log Page: Not Supported 00:14:31.649 Feature Identifiers & Effects Log Page:May Support 00:14:31.649 NVMe-MI Commands & Effects Log Page: May Support 00:14:31.649 Data Area 4 for Telemetry Log: Not Supported 00:14:31.649 Error Log Page Entries Supported: 128 00:14:31.649 Keep Alive: Supported 00:14:31.649 Keep Alive Granularity: 10000 ms 00:14:31.649 00:14:31.649 NVM Command Set Attributes 00:14:31.649 ========================== 00:14:31.649 Submission Queue Entry Size 00:14:31.649 Max: 64 00:14:31.649 Min: 64 00:14:31.649 Completion Queue Entry Size 00:14:31.649 Max: 16 00:14:31.649 Min: 16 00:14:31.649 Number of Namespaces: 32 00:14:31.649 Compare Command: Supported 00:14:31.649 Write Uncorrectable Command: Not Supported 00:14:31.649 Dataset 
Management Command: Supported 00:14:31.649 Write Zeroes Command: Supported 00:14:31.649 Set Features Save Field: Not Supported 00:14:31.649 Reservations: Not Supported 00:14:31.649 Timestamp: Not Supported 00:14:31.649 Copy: Supported 00:14:31.649 Volatile Write Cache: Present 00:14:31.649 Atomic Write Unit (Normal): 1 00:14:31.649 Atomic Write Unit (PFail): 1 00:14:31.649 Atomic Compare & Write Unit: 1 00:14:31.649 Fused Compare & Write: Supported 00:14:31.649 Scatter-Gather List 00:14:31.649 SGL Command Set: Supported (Dword aligned) 00:14:31.649 SGL Keyed: Not Supported 00:14:31.649 SGL Bit Bucket Descriptor: Not Supported 00:14:31.649 SGL Metadata Pointer: Not Supported 00:14:31.649 Oversized SGL: Not Supported 00:14:31.649 SGL Metadata Address: Not Supported 00:14:31.649 SGL Offset: Not Supported 00:14:31.649 Transport SGL Data Block: Not Supported 00:14:31.649 Replay Protected Memory Block: Not Supported 00:14:31.649 00:14:31.649 Firmware Slot Information 00:14:31.649 ========================= 00:14:31.649 Active slot: 1 00:14:31.649 Slot 1 Firmware Revision: 25.01 00:14:31.649 00:14:31.649 00:14:31.649 Commands Supported and Effects 00:14:31.649 ============================== 00:14:31.649 Admin Commands 00:14:31.649 -------------- 00:14:31.649 Get Log Page (02h): Supported 00:14:31.649 Identify (06h): Supported 00:14:31.649 Abort (08h): Supported 00:14:31.649 Set Features (09h): Supported 00:14:31.649 Get Features (0Ah): Supported 00:14:31.649 Asynchronous Event Request (0Ch): Supported 00:14:31.649 Keep Alive (18h): Supported 00:14:31.649 I/O Commands 00:14:31.649 ------------ 00:14:31.649 Flush (00h): Supported LBA-Change 00:14:31.649 Write (01h): Supported LBA-Change 00:14:31.649 Read (02h): Supported 00:14:31.649 Compare (05h): Supported 00:14:31.649 Write Zeroes (08h): Supported LBA-Change 00:14:31.649 Dataset Management (09h): Supported LBA-Change 00:14:31.649 Copy (19h): Supported LBA-Change 00:14:31.649 00:14:31.649 Error Log 00:14:31.649 ========= 
00:14:31.649 00:14:31.649 Arbitration 00:14:31.649 =========== 00:14:31.649 Arbitration Burst: 1 00:14:31.649 00:14:31.649 Power Management 00:14:31.649 ================ 00:14:31.649 Number of Power States: 1 00:14:31.649 Current Power State: Power State #0 00:14:31.649 Power State #0: 00:14:31.649 Max Power: 0.00 W 00:14:31.649 Non-Operational State: Operational 00:14:31.649 Entry Latency: Not Reported 00:14:31.649 Exit Latency: Not Reported 00:14:31.649 Relative Read Throughput: 0 00:14:31.649 Relative Read Latency: 0 00:14:31.649 Relative Write Throughput: 0 00:14:31.649 Relative Write Latency: 0 00:14:31.649 Idle Power: Not Reported 00:14:31.649 Active Power: Not Reported 00:14:31.649 Non-Operational Permissive Mode: Not Supported 00:14:31.649 00:14:31.649 Health Information 00:14:31.649 ================== 00:14:31.649 Critical Warnings: 00:14:31.649 Available Spare Space: OK 00:14:31.649 Temperature: OK 00:14:31.649 Device Reliability: OK 00:14:31.649 Read Only: No 00:14:31.649 Volatile Memory Backup: OK 00:14:31.649 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:31.649 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:31.649 Available Spare: 0% 00:14:31.649 Available Sp[2024-11-20 08:59:57.148325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:31.649 [2024-11-20 08:59:57.148335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:31.649 [2024-11-20 08:59:57.148354] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:31.649 [2024-11-20 08:59:57.148362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.649 [2024-11-20 08:59:57.148367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.649 [2024-11-20 08:59:57.148371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.649 [2024-11-20 08:59:57.148376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.649 [2024-11-20 08:59:57.152164] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:31.649 [2024-11-20 08:59:57.152172] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:31.649 [2024-11-20 08:59:57.152671] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:31.649 [2024-11-20 08:59:57.152712] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:31.649 [2024-11-20 08:59:57.152717] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:31.649 [2024-11-20 08:59:57.153683] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:31.649 [2024-11-20 08:59:57.153690] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:31.649 [2024-11-20 08:59:57.153741] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:31.649 [2024-11-20 08:59:57.154706] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:31.911 are Threshold: 0% 00:14:31.911 Life Percentage Used: 0% 00:14:31.912 Data Units Read: 0 00:14:31.912 Data 
Units Written: 0 00:14:31.912 Host Read Commands: 0 00:14:31.912 Host Write Commands: 0 00:14:31.912 Controller Busy Time: 0 minutes 00:14:31.912 Power Cycles: 0 00:14:31.912 Power On Hours: 0 hours 00:14:31.912 Unsafe Shutdowns: 0 00:14:31.912 Unrecoverable Media Errors: 0 00:14:31.912 Lifetime Error Log Entries: 0 00:14:31.912 Warning Temperature Time: 0 minutes 00:14:31.912 Critical Temperature Time: 0 minutes 00:14:31.912 00:14:31.912 Number of Queues 00:14:31.912 ================ 00:14:31.912 Number of I/O Submission Queues: 127 00:14:31.912 Number of I/O Completion Queues: 127 00:14:31.912 00:14:31.912 Active Namespaces 00:14:31.912 ================= 00:14:31.912 Namespace ID:1 00:14:31.912 Error Recovery Timeout: Unlimited 00:14:31.912 Command Set Identifier: NVM (00h) 00:14:31.912 Deallocate: Supported 00:14:31.912 Deallocated/Unwritten Error: Not Supported 00:14:31.912 Deallocated Read Value: Unknown 00:14:31.912 Deallocate in Write Zeroes: Not Supported 00:14:31.912 Deallocated Guard Field: 0xFFFF 00:14:31.912 Flush: Supported 00:14:31.912 Reservation: Supported 00:14:31.912 Namespace Sharing Capabilities: Multiple Controllers 00:14:31.912 Size (in LBAs): 131072 (0GiB) 00:14:31.912 Capacity (in LBAs): 131072 (0GiB) 00:14:31.912 Utilization (in LBAs): 131072 (0GiB) 00:14:31.912 NGUID: C90B3CB87AB949068C882C1ED8942939 00:14:31.912 UUID: c90b3cb8-7ab9-4906-8c88-2c1ed8942939 00:14:31.912 Thin Provisioning: Not Supported 00:14:31.912 Per-NS Atomic Units: Yes 00:14:31.912 Atomic Boundary Size (Normal): 0 00:14:31.912 Atomic Boundary Size (PFail): 0 00:14:31.912 Atomic Boundary Offset: 0 00:14:31.912 Maximum Single Source Range Length: 65535 00:14:31.912 Maximum Copy Length: 65535 00:14:31.912 Maximum Source Range Count: 1 00:14:31.912 NGUID/EUI64 Never Reused: No 00:14:31.912 Namespace Write Protected: No 00:14:31.912 Number of LBA Formats: 1 00:14:31.912 Current LBA Format: LBA Format #00 00:14:31.912 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:14:31.912 00:14:31.912 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:31.912 [2024-11-20 08:59:57.338842] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:37.205 Initializing NVMe Controllers 00:14:37.205 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.205 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:37.205 Initialization complete. Launching workers. 00:14:37.205 ======================================================== 00:14:37.205 Latency(us) 00:14:37.205 Device Information : IOPS MiB/s Average min max 00:14:37.205 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39990.17 156.21 3201.01 849.59 8942.58 00:14:37.205 ======================================================== 00:14:37.205 Total : 39990.17 156.21 3201.01 849.59 8942.58 00:14:37.205 00:14:37.205 [2024-11-20 09:00:02.359659] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:37.205 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:37.205 [2024-11-20 09:00:02.549489] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:42.493 Initializing NVMe Controllers 00:14:42.493 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:14:42.493 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:42.493 Initialization complete. Launching workers. 00:14:42.493 ======================================================== 00:14:42.493 Latency(us) 00:14:42.493 Device Information : IOPS MiB/s Average min max 00:14:42.493 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16052.77 62.71 7979.24 6982.20 8978.65 00:14:42.493 ======================================================== 00:14:42.493 Total : 16052.77 62.71 7979.24 6982.20 8978.65 00:14:42.493 00:14:42.493 [2024-11-20 09:00:07.593303] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:42.493 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:42.493 [2024-11-20 09:00:07.791114] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:47.780 [2024-11-20 09:00:12.858363] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:47.780 Initializing NVMe Controllers 00:14:47.781 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:47.781 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:47.781 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:47.781 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:47.781 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:47.781 Initialization complete. Launching workers. 
00:14:47.781 Starting thread on core 2 00:14:47.781 Starting thread on core 3 00:14:47.781 Starting thread on core 1 00:14:47.781 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:47.781 [2024-11-20 09:00:13.115563] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.086 [2024-11-20 09:00:16.166530] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.086 Initializing NVMe Controllers 00:14:51.086 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.086 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.086 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:51.086 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:51.086 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:51.086 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:51.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:51.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:51.086 Initialization complete. Launching workers. 
00:14:51.086 Starting thread on core 1 with urgent priority queue 00:14:51.086 Starting thread on core 2 with urgent priority queue 00:14:51.086 Starting thread on core 3 with urgent priority queue 00:14:51.086 Starting thread on core 0 with urgent priority queue 00:14:51.086 SPDK bdev Controller (SPDK1 ) core 0: 9225.67 IO/s 10.84 secs/100000 ios 00:14:51.086 SPDK bdev Controller (SPDK1 ) core 1: 14435.67 IO/s 6.93 secs/100000 ios 00:14:51.086 SPDK bdev Controller (SPDK1 ) core 2: 9582.00 IO/s 10.44 secs/100000 ios 00:14:51.086 SPDK bdev Controller (SPDK1 ) core 3: 15597.33 IO/s 6.41 secs/100000 ios 00:14:51.086 ======================================================== 00:14:51.086 00:14:51.086 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:51.086 [2024-11-20 09:00:16.407454] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.086 Initializing NVMe Controllers 00:14:51.086 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.086 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:51.086 Namespace ID: 1 size: 0GB 00:14:51.086 Initialization complete. 00:14:51.086 INFO: using host memory buffer for IO 00:14:51.086 Hello world! 
00:14:51.086 [2024-11-20 09:00:16.441662] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.086 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:51.347 [2024-11-20 09:00:16.674606] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.289 Initializing NVMe Controllers 00:14:52.289 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.289 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.289 Initialization complete. Launching workers. 00:14:52.289 submit (in ns) avg, min, max = 6739.6, 2823.3, 5992088.3 00:14:52.289 complete (in ns) avg, min, max = 15908.8, 1626.7, 5991008.3 00:14:52.289 00:14:52.289 Submit histogram 00:14:52.289 ================ 00:14:52.289 Range in us Cumulative Count 00:14:52.289 2.813 - 2.827: 0.0497% ( 10) 00:14:52.289 2.827 - 2.840: 0.8892% ( 169) 00:14:52.289 2.840 - 2.853: 2.3993% ( 304) 00:14:52.289 2.853 - 2.867: 5.5636% ( 637) 00:14:52.289 2.867 - 2.880: 10.4068% ( 975) 00:14:52.290 2.880 - 2.893: 17.1725% ( 1362) 00:14:52.290 2.893 - 2.907: 22.6318% ( 1099) 00:14:52.290 2.907 - 2.920: 28.5579% ( 1193) 00:14:52.290 2.920 - 2.933: 35.1547% ( 1328) 00:14:52.290 2.933 - 2.947: 40.9567% ( 1168) 00:14:52.290 2.947 - 2.960: 47.1363% ( 1244) 00:14:52.290 2.960 - 2.973: 53.0525% ( 1191) 00:14:52.290 2.973 - 2.987: 60.9011% ( 1580) 00:14:52.290 2.987 - 3.000: 68.8689% ( 1604) 00:14:52.290 3.000 - 3.013: 76.4145% ( 1519) 00:14:52.290 3.013 - 3.027: 82.9219% ( 1310) 00:14:52.290 3.027 - 3.040: 89.2852% ( 1281) 00:14:52.290 3.040 - 3.053: 93.6864% ( 886) 00:14:52.290 3.053 - 3.067: 96.7711% ( 621) 00:14:52.290 3.067 - 3.080: 98.1422% ( 276) 00:14:52.290 3.080 - 3.093: 
98.9171% ( 156) 00:14:52.290 3.093 - 3.107: 99.2996% ( 77) 00:14:52.290 3.107 - 3.120: 99.4734% ( 35) 00:14:52.290 3.120 - 3.133: 99.5629% ( 18) 00:14:52.290 3.133 - 3.147: 99.6125% ( 10) 00:14:52.290 3.147 - 3.160: 99.6274% ( 3) 00:14:52.290 3.160 - 3.173: 99.6324% ( 1) 00:14:52.290 3.267 - 3.280: 99.6374% ( 1) 00:14:52.290 3.413 - 3.440: 99.6423% ( 1) 00:14:52.290 3.947 - 3.973: 99.6473% ( 1) 00:14:52.290 4.000 - 4.027: 99.6523% ( 1) 00:14:52.290 4.080 - 4.107: 99.6572% ( 1) 00:14:52.290 4.293 - 4.320: 99.6622% ( 1) 00:14:52.290 4.373 - 4.400: 99.6672% ( 1) 00:14:52.290 4.453 - 4.480: 99.6721% ( 1) 00:14:52.290 4.640 - 4.667: 99.6771% ( 1) 00:14:52.290 4.667 - 4.693: 99.6821% ( 1) 00:14:52.290 4.720 - 4.747: 99.6920% ( 2) 00:14:52.290 4.773 - 4.800: 99.6970% ( 1) 00:14:52.290 4.800 - 4.827: 99.7069% ( 2) 00:14:52.290 4.853 - 4.880: 99.7119% ( 1) 00:14:52.290 4.880 - 4.907: 99.7268% ( 3) 00:14:52.290 4.907 - 4.933: 99.7367% ( 2) 00:14:52.290 4.960 - 4.987: 99.7417% ( 1) 00:14:52.290 4.987 - 5.013: 99.7516% ( 2) 00:14:52.290 5.040 - 5.067: 99.7616% ( 2) 00:14:52.290 5.067 - 5.093: 99.7715% ( 2) 00:14:52.290 5.093 - 5.120: 99.7814% ( 2) 00:14:52.290 5.120 - 5.147: 99.7864% ( 1) 00:14:52.290 5.147 - 5.173: 99.7963% ( 2) 00:14:52.290 5.200 - 5.227: 99.8013% ( 1) 00:14:52.290 5.227 - 5.253: 99.8063% ( 1) 00:14:52.290 5.360 - 5.387: 99.8112% ( 1) 00:14:52.290 5.387 - 5.413: 99.8162% ( 1) 00:14:52.290 5.413 - 5.440: 99.8212% ( 1) 00:14:52.290 5.467 - 5.493: 99.8261% ( 1) 00:14:52.290 5.520 - 5.547: 99.8410% ( 3) 00:14:52.290 5.573 - 5.600: 99.8460% ( 1) 00:14:52.290 5.627 - 5.653: 99.8510% ( 1) 00:14:52.290 5.707 - 5.733: 99.8609% ( 2) 00:14:52.290 5.733 - 5.760: 99.8659% ( 1) 00:14:52.290 5.760 - 5.787: 99.8708% ( 1) 00:14:52.290 5.787 - 5.813: 99.8758% ( 1) 00:14:52.290 5.867 - 5.893: 99.8808% ( 1) 00:14:52.290 5.973 - 6.000: 99.8857% ( 1) 00:14:52.290 6.133 - 6.160: 99.8907% ( 1) 00:14:52.290 6.453 - 6.480: 99.8957% ( 1) 00:14:52.290 6.747 - 6.773: 99.9007% ( 1) 
00:14:52.290 6.880 - 6.933: 99.9056% ( 1) 00:14:52.290 2034.347 - 2048.000: 99.9106% ( 1) 00:14:52.290 3986.773 - 4014.080: 99.9950% ( 17) 00:14:52.290 5980.160 - 6007.467: 100.0000% ( 1) 00:14:52.290 00:14:52.290 Complete histogram 00:14:52.290 ================== 00:14:52.290 Range in us Cumulative Count 00:14:52.290 1.627 - 1.633: 0.0099% ( 2) 00:14:52.290 1.640 - 1.647: 0.4818% ( 95) 00:14:52.290 1.647 - 1.653: 0.6160% ( 27) 00:14:52.290 1.653 - 1.660: 0.6656% ( 10) 00:14:52.290 1.660 - 1.667: 0.7700% ( 21) 00:14:52.290 1.667 - [2024-11-20 09:00:17.692382] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.290 1.673: 0.8693% ( 20) 00:14:52.290 1.673 - 1.680: 0.9041% ( 7) 00:14:52.290 1.680 - 1.687: 0.9339% ( 6) 00:14:52.290 1.687 - 1.693: 1.6293% ( 140) 00:14:52.290 1.693 - 1.700: 36.6549% ( 7051) 00:14:52.290 1.700 - 1.707: 51.6517% ( 3019) 00:14:52.290 1.707 - 1.720: 71.1142% ( 3918) 00:14:52.290 1.720 - 1.733: 80.4928% ( 1888) 00:14:52.290 1.733 - 1.747: 82.8672% ( 478) 00:14:52.290 1.747 - 1.760: 85.4255% ( 515) 00:14:52.290 1.760 - 1.773: 91.2771% ( 1178) 00:14:52.290 1.773 - 1.787: 96.1602% ( 983) 00:14:52.290 1.787 - 1.800: 98.4253% ( 456) 00:14:52.290 1.800 - 1.813: 99.2698% ( 170) 00:14:52.290 1.813 - 1.827: 99.4884% ( 44) 00:14:52.290 1.827 - 1.840: 99.5082% ( 4) 00:14:52.290 3.240 - 3.253: 99.5132% ( 1) 00:14:52.290 3.520 - 3.547: 99.5182% ( 1) 00:14:52.290 3.600 - 3.627: 99.5231% ( 1) 00:14:52.290 3.653 - 3.680: 99.5281% ( 1) 00:14:52.290 3.733 - 3.760: 99.5331% ( 1) 00:14:52.290 3.760 - 3.787: 99.5430% ( 2) 00:14:52.290 3.813 - 3.840: 99.5529% ( 2) 00:14:52.290 3.840 - 3.867: 99.5629% ( 2) 00:14:52.290 3.973 - 4.000: 99.5678% ( 1) 00:14:52.290 4.000 - 4.027: 99.5728% ( 1) 00:14:52.290 4.027 - 4.053: 99.5778% ( 1) 00:14:52.290 4.080 - 4.107: 99.5827% ( 1) 00:14:52.290 4.107 - 4.133: 99.5877% ( 1) 00:14:52.290 4.133 - 4.160: 99.5927% ( 1) 00:14:52.290 4.240 - 4.267: 99.5976% ( 1) 
00:14:52.290 4.320 - 4.347: 99.6026% ( 1) 00:14:52.290 4.347 - 4.373: 99.6076% ( 1) 00:14:52.290 4.373 - 4.400: 99.6125% ( 1) 00:14:52.290 4.453 - 4.480: 99.6225% ( 2) 00:14:52.290 4.533 - 4.560: 99.6274% ( 1) 00:14:52.290 4.640 - 4.667: 99.6324% ( 1) 00:14:52.290 4.720 - 4.747: 99.6374% ( 1) 00:14:52.290 4.933 - 4.960: 99.6423% ( 1) 00:14:52.290 5.387 - 5.413: 99.6473% ( 1) 00:14:52.290 3031.040 - 3044.693: 99.6523% ( 1) 00:14:52.290 3986.773 - 4014.080: 99.9901% ( 68) 00:14:52.291 4969.813 - 4997.120: 99.9950% ( 1) 00:14:52.291 5980.160 - 6007.467: 100.0000% ( 1) 00:14:52.291 00:14:52.291 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:52.291 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:52.291 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:52.291 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:52.291 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:52.553 [ 00:14:52.553 { 00:14:52.553 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:52.553 "subtype": "Discovery", 00:14:52.553 "listen_addresses": [], 00:14:52.553 "allow_any_host": true, 00:14:52.553 "hosts": [] 00:14:52.553 }, 00:14:52.553 { 00:14:52.553 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:52.553 "subtype": "NVMe", 00:14:52.553 "listen_addresses": [ 00:14:52.553 { 00:14:52.553 "trtype": "VFIOUSER", 00:14:52.553 "adrfam": "IPv4", 00:14:52.553 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:52.553 "trsvcid": "0" 00:14:52.553 } 00:14:52.553 ], 00:14:52.553 "allow_any_host": true, 00:14:52.553 
"hosts": [], 00:14:52.553 "serial_number": "SPDK1", 00:14:52.553 "model_number": "SPDK bdev Controller", 00:14:52.553 "max_namespaces": 32, 00:14:52.553 "min_cntlid": 1, 00:14:52.553 "max_cntlid": 65519, 00:14:52.553 "namespaces": [ 00:14:52.553 { 00:14:52.553 "nsid": 1, 00:14:52.553 "bdev_name": "Malloc1", 00:14:52.553 "name": "Malloc1", 00:14:52.553 "nguid": "C90B3CB87AB949068C882C1ED8942939", 00:14:52.553 "uuid": "c90b3cb8-7ab9-4906-8c88-2c1ed8942939" 00:14:52.553 } 00:14:52.553 ] 00:14:52.553 }, 00:14:52.553 { 00:14:52.553 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:52.553 "subtype": "NVMe", 00:14:52.553 "listen_addresses": [ 00:14:52.553 { 00:14:52.553 "trtype": "VFIOUSER", 00:14:52.553 "adrfam": "IPv4", 00:14:52.553 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:52.553 "trsvcid": "0" 00:14:52.553 } 00:14:52.553 ], 00:14:52.553 "allow_any_host": true, 00:14:52.553 "hosts": [], 00:14:52.553 "serial_number": "SPDK2", 00:14:52.553 "model_number": "SPDK bdev Controller", 00:14:52.553 "max_namespaces": 32, 00:14:52.553 "min_cntlid": 1, 00:14:52.553 "max_cntlid": 65519, 00:14:52.553 "namespaces": [ 00:14:52.553 { 00:14:52.553 "nsid": 1, 00:14:52.553 "bdev_name": "Malloc2", 00:14:52.553 "name": "Malloc2", 00:14:52.553 "nguid": "8F83ABB9E42A4B62A9602E43EEB2CF1F", 00:14:52.553 "uuid": "8f83abb9-e42a-4b62-a960-2e43eeb2cf1f" 00:14:52.553 } 00:14:52.553 ] 00:14:52.553 } 00:14:52.553 ] 00:14:52.553 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:52.553 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=646937 00:14:52.553 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:52.553 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:52.553 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:52.553 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:52.553 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:52.553 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:52.553 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:52.553 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:52.814 [2024-11-20 09:00:18.081520] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.814 Malloc3 00:14:52.814 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:52.814 [2024-11-20 09:00:18.275877] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.814 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:52.814 Asynchronous Event Request test 00:14:52.814 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.814 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.814 Registering asynchronous event callbacks... 00:14:52.814 Starting namespace attribute notice tests for all controllers... 
00:14:52.814 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:52.814 aer_cb - Changed Namespace 00:14:52.814 Cleaning up... 00:14:53.076 [ 00:14:53.076 { 00:14:53.076 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:53.076 "subtype": "Discovery", 00:14:53.076 "listen_addresses": [], 00:14:53.076 "allow_any_host": true, 00:14:53.076 "hosts": [] 00:14:53.076 }, 00:14:53.076 { 00:14:53.076 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:53.076 "subtype": "NVMe", 00:14:53.076 "listen_addresses": [ 00:14:53.076 { 00:14:53.076 "trtype": "VFIOUSER", 00:14:53.076 "adrfam": "IPv4", 00:14:53.076 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:53.076 "trsvcid": "0" 00:14:53.076 } 00:14:53.076 ], 00:14:53.076 "allow_any_host": true, 00:14:53.076 "hosts": [], 00:14:53.076 "serial_number": "SPDK1", 00:14:53.076 "model_number": "SPDK bdev Controller", 00:14:53.076 "max_namespaces": 32, 00:14:53.076 "min_cntlid": 1, 00:14:53.076 "max_cntlid": 65519, 00:14:53.076 "namespaces": [ 00:14:53.076 { 00:14:53.076 "nsid": 1, 00:14:53.076 "bdev_name": "Malloc1", 00:14:53.076 "name": "Malloc1", 00:14:53.076 "nguid": "C90B3CB87AB949068C882C1ED8942939", 00:14:53.076 "uuid": "c90b3cb8-7ab9-4906-8c88-2c1ed8942939" 00:14:53.076 }, 00:14:53.076 { 00:14:53.076 "nsid": 2, 00:14:53.076 "bdev_name": "Malloc3", 00:14:53.076 "name": "Malloc3", 00:14:53.076 "nguid": "270B2A22A47A47E9A1E2FF0D3D8CD441", 00:14:53.076 "uuid": "270b2a22-a47a-47e9-a1e2-ff0d3d8cd441" 00:14:53.076 } 00:14:53.076 ] 00:14:53.076 }, 00:14:53.076 { 00:14:53.076 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:53.076 "subtype": "NVMe", 00:14:53.076 "listen_addresses": [ 00:14:53.076 { 00:14:53.076 "trtype": "VFIOUSER", 00:14:53.076 "adrfam": "IPv4", 00:14:53.076 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:53.076 "trsvcid": "0" 00:14:53.076 } 00:14:53.076 ], 00:14:53.076 "allow_any_host": true, 00:14:53.076 "hosts": [], 00:14:53.076 "serial_number": 
"SPDK2", 00:14:53.076 "model_number": "SPDK bdev Controller", 00:14:53.076 "max_namespaces": 32, 00:14:53.076 "min_cntlid": 1, 00:14:53.076 "max_cntlid": 65519, 00:14:53.076 "namespaces": [ 00:14:53.076 { 00:14:53.076 "nsid": 1, 00:14:53.076 "bdev_name": "Malloc2", 00:14:53.076 "name": "Malloc2", 00:14:53.076 "nguid": "8F83ABB9E42A4B62A9602E43EEB2CF1F", 00:14:53.076 "uuid": "8f83abb9-e42a-4b62-a960-2e43eeb2cf1f" 00:14:53.076 } 00:14:53.076 ] 00:14:53.076 } 00:14:53.076 ] 00:14:53.076 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 646937 00:14:53.076 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:53.076 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:53.076 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:53.076 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:53.076 [2024-11-20 09:00:18.511659] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:14:53.076 [2024-11-20 09:00:18.511723] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647122 ] 00:14:53.076 [2024-11-20 09:00:18.549505] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:53.076 [2024-11-20 09:00:18.562395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:53.076 [2024-11-20 09:00:18.562417] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f631818e000 00:14:53.076 [2024-11-20 09:00:18.563396] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.076 [2024-11-20 09:00:18.564404] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.076 [2024-11-20 09:00:18.565412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.076 [2024-11-20 09:00:18.566420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.077 [2024-11-20 09:00:18.567429] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.077 [2024-11-20 09:00:18.568437] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.077 [2024-11-20 09:00:18.569441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.077 
[2024-11-20 09:00:18.570447] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.077 [2024-11-20 09:00:18.571458] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:53.077 [2024-11-20 09:00:18.571465] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6318183000 00:14:53.077 [2024-11-20 09:00:18.572374] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:53.077 [2024-11-20 09:00:18.581748] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:53.077 [2024-11-20 09:00:18.581766] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:53.077 [2024-11-20 09:00:18.586846] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:53.077 [2024-11-20 09:00:18.586879] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:53.077 [2024-11-20 09:00:18.586934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:53.077 [2024-11-20 09:00:18.586944] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:53.077 [2024-11-20 09:00:18.586947] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:53.077 [2024-11-20 09:00:18.587843] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:53.077 [2024-11-20 09:00:18.587850] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:53.077 [2024-11-20 09:00:18.587857] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:53.077 [2024-11-20 09:00:18.588853] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:53.077 [2024-11-20 09:00:18.588859] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:53.077 [2024-11-20 09:00:18.588864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:53.077 [2024-11-20 09:00:18.589860] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:53.077 [2024-11-20 09:00:18.589866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:53.077 [2024-11-20 09:00:18.590863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:53.077 [2024-11-20 09:00:18.590870] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:53.077 [2024-11-20 09:00:18.590873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:53.077 [2024-11-20 09:00:18.590878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:53.077 [2024-11-20 09:00:18.590984] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:53.077 [2024-11-20 09:00:18.590987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:53.077 [2024-11-20 09:00:18.590991] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:53.077 [2024-11-20 09:00:18.591873] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:53.077 [2024-11-20 09:00:18.592880] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:53.077 [2024-11-20 09:00:18.593890] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:53.077 [2024-11-20 09:00:18.594892] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:53.077 [2024-11-20 09:00:18.594924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:53.077 [2024-11-20 09:00:18.595904] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:53.077 [2024-11-20 09:00:18.595911] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:53.077 [2024-11-20 09:00:18.595914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:53.077 [2024-11-20 09:00:18.595929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:53.077 [2024-11-20 09:00:18.595937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:53.077 [2024-11-20 09:00:18.595945] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.077 [2024-11-20 09:00:18.595950] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.077 [2024-11-20 09:00:18.595953] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.077 [2024-11-20 09:00:18.595962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.339 [2024-11-20 09:00:18.603163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:53.339 [2024-11-20 09:00:18.603173] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:53.339 [2024-11-20 09:00:18.603176] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:53.339 [2024-11-20 09:00:18.603179] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:53.339 [2024-11-20 09:00:18.603183] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:53.339 [2024-11-20 09:00:18.603188] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:53.339 [2024-11-20 09:00:18.603191] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:53.339 [2024-11-20 09:00:18.603195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:53.339 [2024-11-20 09:00:18.603202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:53.339 [2024-11-20 09:00:18.603209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:53.339 [2024-11-20 09:00:18.611163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:53.339 [2024-11-20 09:00:18.611172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.339 [2024-11-20 09:00:18.611178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.339 [2024-11-20 09:00:18.611184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.339 [2024-11-20 09:00:18.611190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.339 [2024-11-20 09:00:18.611193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:53.339 [2024-11-20 09:00:18.611198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:53.339 [2024-11-20 09:00:18.611205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:53.339 [2024-11-20 09:00:18.619163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:53.339 [2024-11-20 09:00:18.619170] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:53.339 [2024-11-20 09:00:18.619174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:53.339 [2024-11-20 09:00:18.619178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:53.339 [2024-11-20 09:00:18.619184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:53.339 [2024-11-20 09:00:18.619191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:53.339 [2024-11-20 09:00:18.624739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:53.339 [2024-11-20 09:00:18.624822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:53.339 [2024-11-20 09:00:18.624829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:53.339 
[2024-11-20 09:00:18.624834] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:53.339 [2024-11-20 09:00:18.624837] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:53.339 [2024-11-20 09:00:18.624840] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.339 [2024-11-20 09:00:18.624845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:53.339 [2024-11-20 09:00:18.634165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:53.339 [2024-11-20 09:00:18.634174] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:53.340 [2024-11-20 09:00:18.634182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:53.340 [2024-11-20 09:00:18.634187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:53.340 [2024-11-20 09:00:18.634192] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.340 [2024-11-20 09:00:18.634195] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.340 [2024-11-20 09:00:18.634198] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.340 [2024-11-20 09:00:18.634202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.340 [2024-11-20 09:00:18.642163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:53.340 [2024-11-20 09:00:18.642173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:53.340 [2024-11-20 09:00:18.642179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:53.340 [2024-11-20 09:00:18.642184] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.340 [2024-11-20 09:00:18.642187] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.340 [2024-11-20 09:00:18.642189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.340 [2024-11-20 09:00:18.642194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.340 [2024-11-20 09:00:18.650195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:53.340 [2024-11-20 09:00:18.650203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:53.340 [2024-11-20 09:00:18.650210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:53.340 [2024-11-20 09:00:18.650216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:53.340 [2024-11-20 09:00:18.650220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:53.340 [2024-11-20 09:00:18.650223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:53.340 [2024-11-20 09:00:18.650227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:53.340 [2024-11-20 09:00:18.650230] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:53.340 [2024-11-20 09:00:18.650234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:53.340 [2024-11-20 09:00:18.650237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:53.340 [2024-11-20 09:00:18.650250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:53.340 [2024-11-20 09:00:18.658163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:53.340 [2024-11-20 09:00:18.658173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:53.340 [2024-11-20 09:00:18.666162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:53.340 [2024-11-20 09:00:18.666178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:53.340 [2024-11-20 09:00:18.674163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:53.340 [2024-11-20 
09:00:18.674172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:53.340 [2024-11-20 09:00:18.682163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:53.340 [2024-11-20 09:00:18.682174] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:53.340 [2024-11-20 09:00:18.682178] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:53.340 [2024-11-20 09:00:18.682181] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:53.340 [2024-11-20 09:00:18.682183] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:53.340 [2024-11-20 09:00:18.682186] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:53.340 [2024-11-20 09:00:18.682190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:53.340 [2024-11-20 09:00:18.682196] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:53.340 [2024-11-20 09:00:18.682199] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:53.340 [2024-11-20 09:00:18.682201] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.340 [2024-11-20 09:00:18.682205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:53.340 [2024-11-20 09:00:18.682210] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:53.340 [2024-11-20 09:00:18.682215] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.340 [2024-11-20 09:00:18.682217] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.340 [2024-11-20 09:00:18.682222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.340 [2024-11-20 09:00:18.682227] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:53.340 [2024-11-20 09:00:18.682230] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:53.340 [2024-11-20 09:00:18.682232] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.340 [2024-11-20 09:00:18.682237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:53.340 [2024-11-20 09:00:18.690164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:53.340 [2024-11-20 09:00:18.690174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:53.340 [2024-11-20 09:00:18.690182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:53.340 [2024-11-20 09:00:18.690186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:53.340 ===================================================== 00:14:53.340 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:53.340 ===================================================== 00:14:53.340 Controller Capabilities/Features 00:14:53.340 
================================ 00:14:53.340 Vendor ID: 4e58 00:14:53.340 Subsystem Vendor ID: 4e58 00:14:53.340 Serial Number: SPDK2 00:14:53.340 Model Number: SPDK bdev Controller 00:14:53.340 Firmware Version: 25.01 00:14:53.340 Recommended Arb Burst: 6 00:14:53.340 IEEE OUI Identifier: 8d 6b 50 00:14:53.340 Multi-path I/O 00:14:53.340 May have multiple subsystem ports: Yes 00:14:53.340 May have multiple controllers: Yes 00:14:53.340 Associated with SR-IOV VF: No 00:14:53.340 Max Data Transfer Size: 131072 00:14:53.340 Max Number of Namespaces: 32 00:14:53.340 Max Number of I/O Queues: 127 00:14:53.340 NVMe Specification Version (VS): 1.3 00:14:53.340 NVMe Specification Version (Identify): 1.3 00:14:53.340 Maximum Queue Entries: 256 00:14:53.340 Contiguous Queues Required: Yes 00:14:53.340 Arbitration Mechanisms Supported 00:14:53.340 Weighted Round Robin: Not Supported 00:14:53.340 Vendor Specific: Not Supported 00:14:53.340 Reset Timeout: 15000 ms 00:14:53.340 Doorbell Stride: 4 bytes 00:14:53.340 NVM Subsystem Reset: Not Supported 00:14:53.340 Command Sets Supported 00:14:53.340 NVM Command Set: Supported 00:14:53.340 Boot Partition: Not Supported 00:14:53.340 Memory Page Size Minimum: 4096 bytes 00:14:53.340 Memory Page Size Maximum: 4096 bytes 00:14:53.340 Persistent Memory Region: Not Supported 00:14:53.340 Optional Asynchronous Events Supported 00:14:53.340 Namespace Attribute Notices: Supported 00:14:53.340 Firmware Activation Notices: Not Supported 00:14:53.340 ANA Change Notices: Not Supported 00:14:53.340 PLE Aggregate Log Change Notices: Not Supported 00:14:53.340 LBA Status Info Alert Notices: Not Supported 00:14:53.340 EGE Aggregate Log Change Notices: Not Supported 00:14:53.340 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.340 Zone Descriptor Change Notices: Not Supported 00:14:53.340 Discovery Log Change Notices: Not Supported 00:14:53.340 Controller Attributes 00:14:53.340 128-bit Host Identifier: Supported 00:14:53.340 
Non-Operational Permissive Mode: Not Supported 00:14:53.340 NVM Sets: Not Supported 00:14:53.340 Read Recovery Levels: Not Supported 00:14:53.340 Endurance Groups: Not Supported 00:14:53.340 Predictable Latency Mode: Not Supported 00:14:53.340 Traffic Based Keep ALive: Not Supported 00:14:53.340 Namespace Granularity: Not Supported 00:14:53.340 SQ Associations: Not Supported 00:14:53.340 UUID List: Not Supported 00:14:53.340 Multi-Domain Subsystem: Not Supported 00:14:53.340 Fixed Capacity Management: Not Supported 00:14:53.340 Variable Capacity Management: Not Supported 00:14:53.340 Delete Endurance Group: Not Supported 00:14:53.340 Delete NVM Set: Not Supported 00:14:53.340 Extended LBA Formats Supported: Not Supported 00:14:53.340 Flexible Data Placement Supported: Not Supported 00:14:53.340 00:14:53.340 Controller Memory Buffer Support 00:14:53.340 ================================ 00:14:53.340 Supported: No 00:14:53.340 00:14:53.341 Persistent Memory Region Support 00:14:53.341 ================================ 00:14:53.341 Supported: No 00:14:53.341 00:14:53.341 Admin Command Set Attributes 00:14:53.341 ============================ 00:14:53.341 Security Send/Receive: Not Supported 00:14:53.341 Format NVM: Not Supported 00:14:53.341 Firmware Activate/Download: Not Supported 00:14:53.341 Namespace Management: Not Supported 00:14:53.341 Device Self-Test: Not Supported 00:14:53.341 Directives: Not Supported 00:14:53.341 NVMe-MI: Not Supported 00:14:53.341 Virtualization Management: Not Supported 00:14:53.341 Doorbell Buffer Config: Not Supported 00:14:53.341 Get LBA Status Capability: Not Supported 00:14:53.341 Command & Feature Lockdown Capability: Not Supported 00:14:53.341 Abort Command Limit: 4 00:14:53.341 Async Event Request Limit: 4 00:14:53.341 Number of Firmware Slots: N/A 00:14:53.341 Firmware Slot 1 Read-Only: N/A 00:14:53.341 Firmware Activation Without Reset: N/A 00:14:53.341 Multiple Update Detection Support: N/A 00:14:53.341 Firmware Update 
Granularity: No Information Provided 00:14:53.341 Per-Namespace SMART Log: No 00:14:53.341 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.341 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:53.341 Command Effects Log Page: Supported 00:14:53.341 Get Log Page Extended Data: Supported 00:14:53.341 Telemetry Log Pages: Not Supported 00:14:53.341 Persistent Event Log Pages: Not Supported 00:14:53.341 Supported Log Pages Log Page: May Support 00:14:53.341 Commands Supported & Effects Log Page: Not Supported 00:14:53.341 Feature Identifiers & Effects Log Page:May Support 00:14:53.341 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.341 Data Area 4 for Telemetry Log: Not Supported 00:14:53.341 Error Log Page Entries Supported: 128 00:14:53.341 Keep Alive: Supported 00:14:53.341 Keep Alive Granularity: 10000 ms 00:14:53.341 00:14:53.341 NVM Command Set Attributes 00:14:53.341 ========================== 00:14:53.341 Submission Queue Entry Size 00:14:53.341 Max: 64 00:14:53.341 Min: 64 00:14:53.341 Completion Queue Entry Size 00:14:53.341 Max: 16 00:14:53.341 Min: 16 00:14:53.341 Number of Namespaces: 32 00:14:53.341 Compare Command: Supported 00:14:53.341 Write Uncorrectable Command: Not Supported 00:14:53.341 Dataset Management Command: Supported 00:14:53.341 Write Zeroes Command: Supported 00:14:53.341 Set Features Save Field: Not Supported 00:14:53.341 Reservations: Not Supported 00:14:53.341 Timestamp: Not Supported 00:14:53.341 Copy: Supported 00:14:53.341 Volatile Write Cache: Present 00:14:53.341 Atomic Write Unit (Normal): 1 00:14:53.341 Atomic Write Unit (PFail): 1 00:14:53.341 Atomic Compare & Write Unit: 1 00:14:53.341 Fused Compare & Write: Supported 00:14:53.341 Scatter-Gather List 00:14:53.341 SGL Command Set: Supported (Dword aligned) 00:14:53.341 SGL Keyed: Not Supported 00:14:53.341 SGL Bit Bucket Descriptor: Not Supported 00:14:53.341 SGL Metadata Pointer: Not Supported 00:14:53.341 Oversized SGL: Not Supported 00:14:53.341 SGL 
Metadata Address: Not Supported 00:14:53.341 SGL Offset: Not Supported 00:14:53.341 Transport SGL Data Block: Not Supported 00:14:53.341 Replay Protected Memory Block: Not Supported 00:14:53.341 00:14:53.341 Firmware Slot Information 00:14:53.341 ========================= 00:14:53.341 Active slot: 1 00:14:53.341 Slot 1 Firmware Revision: 25.01 00:14:53.341 00:14:53.341 00:14:53.341 Commands Supported and Effects 00:14:53.341 ============================== 00:14:53.341 Admin Commands 00:14:53.341 -------------- 00:14:53.341 Get Log Page (02h): Supported 00:14:53.341 Identify (06h): Supported 00:14:53.341 Abort (08h): Supported 00:14:53.341 Set Features (09h): Supported 00:14:53.341 Get Features (0Ah): Supported 00:14:53.341 Asynchronous Event Request (0Ch): Supported 00:14:53.341 Keep Alive (18h): Supported 00:14:53.341 I/O Commands 00:14:53.341 ------------ 00:14:53.341 Flush (00h): Supported LBA-Change 00:14:53.341 Write (01h): Supported LBA-Change 00:14:53.341 Read (02h): Supported 00:14:53.341 Compare (05h): Supported 00:14:53.341 Write Zeroes (08h): Supported LBA-Change 00:14:53.341 Dataset Management (09h): Supported LBA-Change 00:14:53.341 Copy (19h): Supported LBA-Change 00:14:53.341 00:14:53.341 Error Log 00:14:53.341 ========= 00:14:53.341 00:14:53.341 Arbitration 00:14:53.341 =========== 00:14:53.341 Arbitration Burst: 1 00:14:53.341 00:14:53.341 Power Management 00:14:53.341 ================ 00:14:53.341 Number of Power States: 1 00:14:53.341 Current Power State: Power State #0 00:14:53.341 Power State #0: 00:14:53.341 Max Power: 0.00 W 00:14:53.341 Non-Operational State: Operational 00:14:53.341 Entry Latency: Not Reported 00:14:53.341 Exit Latency: Not Reported 00:14:53.341 Relative Read Throughput: 0 00:14:53.341 Relative Read Latency: 0 00:14:53.341 Relative Write Throughput: 0 00:14:53.341 Relative Write Latency: 0 00:14:53.341 Idle Power: Not Reported 00:14:53.341 Active Power: Not Reported 00:14:53.341 Non-Operational Permissive Mode: Not 
Supported 00:14:53.341 00:14:53.341 Health Information 00:14:53.341 ================== 00:14:53.341 Critical Warnings: 00:14:53.341 Available Spare Space: OK 00:14:53.341 Temperature: OK 00:14:53.341 Device Reliability: OK 00:14:53.341 Read Only: No 00:14:53.341 Volatile Memory Backup: OK 00:14:53.341 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:53.341 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:53.341 Available Spare: 0% 00:14:53.341 Available Sp[2024-11-20 09:00:18.690257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:53.341 [2024-11-20 09:00:18.698162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:53.341 [2024-11-20 09:00:18.698184] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:53.341 [2024-11-20 09:00:18.698191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.341 [2024-11-20 09:00:18.698196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.341 [2024-11-20 09:00:18.698200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.341 [2024-11-20 09:00:18.698205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.341 [2024-11-20 09:00:18.698241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:53.341 [2024-11-20 09:00:18.698249] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:53.341 
[2024-11-20 09:00:18.699244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:53.341 [2024-11-20 09:00:18.699281] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:53.341 [2024-11-20 09:00:18.699285] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:53.341 [2024-11-20 09:00:18.700248] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:53.341 [2024-11-20 09:00:18.700257] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:53.341 [2024-11-20 09:00:18.700300] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:53.341 [2024-11-20 09:00:18.701270] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:53.341 are Threshold: 0% 00:14:53.341 Life Percentage Used: 0% 00:14:53.341 Data Units Read: 0 00:14:53.341 Data Units Written: 0 00:14:53.341 Host Read Commands: 0 00:14:53.341 Host Write Commands: 0 00:14:53.341 Controller Busy Time: 0 minutes 00:14:53.341 Power Cycles: 0 00:14:53.341 Power On Hours: 0 hours 00:14:53.341 Unsafe Shutdowns: 0 00:14:53.341 Unrecoverable Media Errors: 0 00:14:53.341 Lifetime Error Log Entries: 0 00:14:53.341 Warning Temperature Time: 0 minutes 00:14:53.341 Critical Temperature Time: 0 minutes 00:14:53.341 00:14:53.341 Number of Queues 00:14:53.341 ================ 00:14:53.341 Number of I/O Submission Queues: 127 00:14:53.341 Number of I/O Completion Queues: 127 00:14:53.341 00:14:53.341 Active Namespaces 00:14:53.341 ================= 00:14:53.341 Namespace ID:1 00:14:53.341 Error Recovery Timeout: Unlimited 
00:14:53.341 Command Set Identifier: NVM (00h) 00:14:53.341 Deallocate: Supported 00:14:53.341 Deallocated/Unwritten Error: Not Supported 00:14:53.341 Deallocated Read Value: Unknown 00:14:53.341 Deallocate in Write Zeroes: Not Supported 00:14:53.341 Deallocated Guard Field: 0xFFFF 00:14:53.341 Flush: Supported 00:14:53.341 Reservation: Supported 00:14:53.341 Namespace Sharing Capabilities: Multiple Controllers 00:14:53.341 Size (in LBAs): 131072 (0GiB) 00:14:53.342 Capacity (in LBAs): 131072 (0GiB) 00:14:53.342 Utilization (in LBAs): 131072 (0GiB) 00:14:53.342 NGUID: 8F83ABB9E42A4B62A9602E43EEB2CF1F 00:14:53.342 UUID: 8f83abb9-e42a-4b62-a960-2e43eeb2cf1f 00:14:53.342 Thin Provisioning: Not Supported 00:14:53.342 Per-NS Atomic Units: Yes 00:14:53.342 Atomic Boundary Size (Normal): 0 00:14:53.342 Atomic Boundary Size (PFail): 0 00:14:53.342 Atomic Boundary Offset: 0 00:14:53.342 Maximum Single Source Range Length: 65535 00:14:53.342 Maximum Copy Length: 65535 00:14:53.342 Maximum Source Range Count: 1 00:14:53.342 NGUID/EUI64 Never Reused: No 00:14:53.342 Namespace Write Protected: No 00:14:53.342 Number of LBA Formats: 1 00:14:53.342 Current LBA Format: LBA Format #00 00:14:53.342 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:53.342 00:14:53.342 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:53.603 [2024-11-20 09:00:18.890231] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:58.886 Initializing NVMe Controllers 00:14:58.886 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:58.886 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:58.886 Initialization complete. Launching workers. 00:14:58.886 ======================================================== 00:14:58.886 Latency(us) 00:14:58.886 Device Information : IOPS MiB/s Average min max 00:14:58.886 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39964.97 156.11 3202.47 844.47 7780.22 00:14:58.886 ======================================================== 00:14:58.886 Total : 39964.97 156.11 3202.47 844.47 7780.22 00:14:58.886 00:14:58.886 [2024-11-20 09:00:23.995336] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:58.886 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:58.886 [2024-11-20 09:00:24.185939] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:04.172 Initializing NVMe Controllers 00:15:04.172 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:04.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:04.172 Initialization complete. Launching workers. 
00:15:04.172 ======================================================== 00:15:04.172 Latency(us) 00:15:04.172 Device Information : IOPS MiB/s Average min max 00:15:04.172 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40027.40 156.36 3198.34 845.49 6927.99 00:15:04.172 ======================================================== 00:15:04.172 Total : 40027.40 156.36 3198.34 845.49 6927.99 00:15:04.172 00:15:04.172 [2024-11-20 09:00:29.206922] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:04.172 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:04.172 [2024-11-20 09:00:29.411114] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:09.464 [2024-11-20 09:00:34.545248] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:09.465 Initializing NVMe Controllers 00:15:09.465 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:09.465 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:09.465 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:09.465 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:09.465 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:09.465 Initialization complete. Launching workers. 
00:15:09.465 Starting thread on core 2 00:15:09.465 Starting thread on core 3 00:15:09.465 Starting thread on core 1 00:15:09.465 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:09.465 [2024-11-20 09:00:34.791215] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.892 [2024-11-20 09:00:37.851196] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.892 Initializing NVMe Controllers 00:15:12.892 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:12.892 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:12.892 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:12.892 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:12.892 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:12.892 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:12.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:12.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:12.892 Initialization complete. Launching workers. 
00:15:12.892 Starting thread on core 1 with urgent priority queue 00:15:12.892 Starting thread on core 2 with urgent priority queue 00:15:12.892 Starting thread on core 3 with urgent priority queue 00:15:12.892 Starting thread on core 0 with urgent priority queue 00:15:12.892 SPDK bdev Controller (SPDK2 ) core 0: 12877.00 IO/s 7.77 secs/100000 ios 00:15:12.892 SPDK bdev Controller (SPDK2 ) core 1: 10494.67 IO/s 9.53 secs/100000 ios 00:15:12.892 SPDK bdev Controller (SPDK2 ) core 2: 9998.67 IO/s 10.00 secs/100000 ios 00:15:12.892 SPDK bdev Controller (SPDK2 ) core 3: 11294.00 IO/s 8.85 secs/100000 ios 00:15:12.892 ======================================================== 00:15:12.892 00:15:12.892 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:12.892 [2024-11-20 09:00:38.087161] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.892 Initializing NVMe Controllers 00:15:12.892 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:12.892 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:12.892 Namespace ID: 1 size: 0GB 00:15:12.892 Initialization complete. 00:15:12.892 INFO: using host memory buffer for IO 00:15:12.892 Hello world! 
00:15:12.892 [2024-11-20 09:00:38.099234] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.892 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:12.892 [2024-11-20 09:00:38.335290] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.279 Initializing NVMe Controllers 00:15:14.279 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.279 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.279 Initialization complete. Launching workers. 00:15:14.279 submit (in ns) avg, min, max = 5918.8, 2829.2, 3998924.2 00:15:14.279 complete (in ns) avg, min, max = 15623.3, 1632.5, 3997432.5 00:15:14.279 00:15:14.279 Submit histogram 00:15:14.279 ================ 00:15:14.279 Range in us Cumulative Count 00:15:14.279 2.827 - 2.840: 0.3535% ( 72) 00:15:14.279 2.840 - 2.853: 1.6249% ( 259) 00:15:14.279 2.853 - 2.867: 4.4674% ( 579) 00:15:14.279 2.867 - 2.880: 9.9116% ( 1109) 00:15:14.279 2.880 - 2.893: 15.4148% ( 1121) 00:15:14.279 2.893 - 2.907: 20.4026% ( 1016) 00:15:14.279 2.907 - 2.920: 25.7486% ( 1089) 00:15:14.279 2.920 - 2.933: 30.6087% ( 990) 00:15:14.279 2.933 - 2.947: 36.3721% ( 1174) 00:15:14.279 2.947 - 2.960: 41.4777% ( 1040) 00:15:14.279 2.960 - 2.973: 46.7698% ( 1078) 00:15:14.279 2.973 - 2.987: 52.3024% ( 1127) 00:15:14.279 2.987 - 3.000: 60.1031% ( 1589) 00:15:14.279 3.000 - 3.013: 69.1360% ( 1840) 00:15:14.279 3.013 - 3.027: 78.2769% ( 1862) 00:15:14.279 3.027 - 3.040: 85.8027% ( 1533) 00:15:14.279 3.040 - 3.053: 92.0717% ( 1277) 00:15:14.279 3.053 - 3.067: 95.9303% ( 786) 00:15:14.279 3.067 - 3.080: 97.8007% ( 381) 00:15:14.279 3.080 - 3.093: 98.8365% ( 211) 00:15:14.279 3.093 - 3.107: 
99.3422% ( 103) 00:15:14.279 3.107 - 3.120: 99.5091% ( 34) 00:15:14.279 3.120 - 3.133: 99.5582% ( 10) 00:15:14.279 3.133 - 3.147: 99.5680% ( 2) 00:15:14.279 3.147 - 3.160: 99.5778% ( 2) 00:15:14.279 3.160 - 3.173: 99.5974% ( 4) 00:15:14.279 3.173 - 3.187: 99.6073% ( 2) 00:15:14.279 3.187 - 3.200: 99.6122% ( 1) 00:15:14.279 3.213 - 3.227: 99.6171% ( 1) 00:15:14.279 3.253 - 3.267: 99.6220% ( 1) 00:15:14.279 3.280 - 3.293: 99.6269% ( 1) 00:15:14.279 3.307 - 3.320: 99.6318% ( 1) 00:15:14.279 3.413 - 3.440: 99.6367% ( 1) 00:15:14.279 3.493 - 3.520: 99.6465% ( 2) 00:15:14.279 3.627 - 3.653: 99.6514% ( 1) 00:15:14.279 3.787 - 3.813: 99.6564% ( 1) 00:15:14.279 3.840 - 3.867: 99.6613% ( 1) 00:15:14.279 3.867 - 3.893: 99.6662% ( 1) 00:15:14.279 3.920 - 3.947: 99.6711% ( 1) 00:15:14.279 3.947 - 3.973: 99.6760% ( 1) 00:15:14.279 4.053 - 4.080: 99.6907% ( 3) 00:15:14.279 4.320 - 4.347: 99.6956% ( 1) 00:15:14.279 4.507 - 4.533: 99.7005% ( 1) 00:15:14.279 4.587 - 4.613: 99.7054% ( 1) 00:15:14.279 4.613 - 4.640: 99.7104% ( 1) 00:15:14.279 4.640 - 4.667: 99.7153% ( 1) 00:15:14.279 4.667 - 4.693: 99.7202% ( 1) 00:15:14.279 4.747 - 4.773: 99.7300% ( 2) 00:15:14.279 4.773 - 4.800: 99.7349% ( 1) 00:15:14.279 4.827 - 4.853: 99.7398% ( 1) 00:15:14.279 4.853 - 4.880: 99.7496% ( 2) 00:15:14.279 4.907 - 4.933: 99.7545% ( 1) 00:15:14.279 4.960 - 4.987: 99.7644% ( 2) 00:15:14.279 4.987 - 5.013: 99.7693% ( 1) 00:15:14.279 5.067 - 5.093: 99.7742% ( 1) 00:15:14.279 5.120 - 5.147: 99.7840% ( 2) 00:15:14.279 5.227 - 5.253: 99.7889% ( 1) 00:15:14.279 5.253 - 5.280: 99.7938% ( 1) 00:15:14.279 5.280 - 5.307: 99.7987% ( 1) 00:15:14.279 5.387 - 5.413: 99.8135% ( 3) 00:15:14.279 5.573 - 5.600: 99.8233% ( 2) 00:15:14.279 5.707 - 5.733: 99.8282% ( 1) 00:15:14.279 5.733 - 5.760: 99.8331% ( 1) 00:15:14.279 5.760 - 5.787: 99.8380% ( 1) 00:15:14.279 5.787 - 5.813: 99.8429% ( 1) 00:15:14.279 5.867 - 5.893: 99.8527% ( 2) 00:15:14.279 5.920 - 5.947: 99.8625% ( 2) 00:15:14.279 6.027 - 6.053: 99.8675% ( 1) 
00:15:14.279 6.080 - 6.107: 99.8724% ( 1) 00:15:14.279 6.107 - 6.133: 99.8773% ( 1) 00:15:14.279 6.187 - 6.213: 99.8822% ( 1) 00:15:14.279 6.240 - 6.267: 99.8920% ( 2) 00:15:14.279 6.267 - 6.293: 99.8969% ( 1) 00:15:14.279 6.427 - 6.453: 99.9018% ( 1) 00:15:14.279 6.453 - 6.480: 99.9067% ( 1) 00:15:14.279 6.480 - 6.507: 99.9116% ( 1) 00:15:14.279 6.587 - 6.613: 99.9165% ( 1) 00:15:14.279 6.880 - 6.933: 99.9215% ( 1) 00:15:14.279 13.653 - 13.760: 99.9264% ( 1) 00:15:14.279 [2024-11-20 09:00:39.430710] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.279 3986.773 - 4014.080: 100.0000% ( 15) 00:15:14.279 00:15:14.279 Complete histogram 00:15:14.279 ================== 00:15:14.279 Range in us Cumulative Count 00:15:14.279 1.627 - 1.633: 0.0049% ( 1) 00:15:14.279 1.633 - 1.640: 0.0098% ( 1) 00:15:14.279 1.640 - 1.647: 0.6529% ( 131) 00:15:14.279 1.647 - 1.653: 0.7609% ( 22) 00:15:14.279 1.653 - 1.660: 0.8198% ( 12) 00:15:14.279 1.660 - 1.667: 0.8886% ( 14) 00:15:14.279 1.667 - 1.673: 1.1537% ( 54) 00:15:14.279 1.673 - 1.680: 51.3844% ( 10232) 00:15:14.279 1.680 - 1.687: 63.6868% ( 2506) 00:15:14.279 1.687 - 1.693: 71.3451% ( 1560) 00:15:14.279 1.693 - 1.700: 82.8621% ( 2346) 00:15:14.279 1.700 - 1.707: 86.9661% ( 836) 00:15:14.279 1.707 - 1.720: 92.8424% ( 1197) 00:15:14.279 1.720 - 1.733: 93.9028% ( 216) 00:15:14.279 1.733 - 1.747: 95.3510% ( 295) 00:15:14.279 1.747 - 1.760: 97.4767% ( 433) 00:15:14.279 1.760 - 1.773: 98.6402% ( 237) 00:15:14.279 1.773 - 1.787: 99.2342% ( 121) 00:15:14.279 1.787 - 1.800: 99.4060% ( 35) 00:15:14.279 1.800 - 1.813: 99.4354% ( 6) 00:15:14.279 1.813 - 1.827: 99.4453% ( 2) 00:15:14.279 1.840 - 1.853: 99.4551% ( 2) 00:15:14.279 1.853 - 1.867: 99.4649% ( 2) 00:15:14.279 1.867 - 1.880: 99.4698% ( 1) 00:15:14.279 1.880 - 1.893: 99.4747% ( 1) 00:15:14.279 1.907 - 1.920: 99.4796% ( 1) 00:15:14.279 3.280 - 3.293: 99.4845% ( 1) 00:15:14.279 3.347 - 3.360: 99.4894% ( 1) 00:15:14.279 
3.400 - 3.413: 99.4944% ( 1) 00:15:14.279 3.467 - 3.493: 99.5042% ( 2) 00:15:14.279 3.547 - 3.573: 99.5091% ( 1) 00:15:14.279 3.760 - 3.787: 99.5140% ( 1) 00:15:14.279 3.840 - 3.867: 99.5189% ( 1) 00:15:14.279 3.947 - 3.973: 99.5287% ( 2) 00:15:14.279 3.973 - 4.000: 99.5385% ( 2) 00:15:14.279 4.053 - 4.080: 99.5434% ( 1) 00:15:14.279 4.293 - 4.320: 99.5484% ( 1) 00:15:14.279 4.347 - 4.373: 99.5582% ( 2) 00:15:14.279 4.400 - 4.427: 99.5729% ( 3) 00:15:14.279 4.453 - 4.480: 99.5778% ( 1) 00:15:14.279 4.480 - 4.507: 99.5876% ( 2) 00:15:14.279 4.507 - 4.533: 99.5925% ( 1) 00:15:14.279 4.533 - 4.560: 99.5974% ( 1) 00:15:14.279 4.587 - 4.613: 99.6024% ( 1) 00:15:14.279 4.693 - 4.720: 99.6073% ( 1) 00:15:14.279 5.040 - 5.067: 99.6122% ( 1) 00:15:14.279 5.147 - 5.173: 99.6171% ( 1) 00:15:14.279 5.253 - 5.280: 99.6220% ( 1) 00:15:14.279 9.333 - 9.387: 99.6269% ( 1) 00:15:14.279 10.773 - 10.827: 99.6318% ( 1) 00:15:14.279 11.040 - 11.093: 99.6367% ( 1) 00:15:14.279 34.133 - 34.347: 99.6416% ( 1) 00:15:14.279 43.520 - 43.733: 99.6465% ( 1) 00:15:14.279 91.307 - 91.733: 99.6514% ( 1) 00:15:14.279 3986.773 - 4014.080: 100.0000% ( 71) 00:15:14.279 00:15:14.279 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:14.279 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:14.279 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:14.279 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:14.279 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:14.279 [ 00:15:14.279 { 00:15:14.279 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:15:14.279 "subtype": "Discovery", 00:15:14.279 "listen_addresses": [], 00:15:14.280 "allow_any_host": true, 00:15:14.280 "hosts": [] 00:15:14.280 }, 00:15:14.280 { 00:15:14.280 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:14.280 "subtype": "NVMe", 00:15:14.280 "listen_addresses": [ 00:15:14.280 { 00:15:14.280 "trtype": "VFIOUSER", 00:15:14.280 "adrfam": "IPv4", 00:15:14.280 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:14.280 "trsvcid": "0" 00:15:14.280 } 00:15:14.280 ], 00:15:14.280 "allow_any_host": true, 00:15:14.280 "hosts": [], 00:15:14.280 "serial_number": "SPDK1", 00:15:14.280 "model_number": "SPDK bdev Controller", 00:15:14.280 "max_namespaces": 32, 00:15:14.280 "min_cntlid": 1, 00:15:14.280 "max_cntlid": 65519, 00:15:14.280 "namespaces": [ 00:15:14.280 { 00:15:14.280 "nsid": 1, 00:15:14.280 "bdev_name": "Malloc1", 00:15:14.280 "name": "Malloc1", 00:15:14.280 "nguid": "C90B3CB87AB949068C882C1ED8942939", 00:15:14.280 "uuid": "c90b3cb8-7ab9-4906-8c88-2c1ed8942939" 00:15:14.280 }, 00:15:14.280 { 00:15:14.280 "nsid": 2, 00:15:14.280 "bdev_name": "Malloc3", 00:15:14.280 "name": "Malloc3", 00:15:14.280 "nguid": "270B2A22A47A47E9A1E2FF0D3D8CD441", 00:15:14.280 "uuid": "270b2a22-a47a-47e9-a1e2-ff0d3d8cd441" 00:15:14.280 } 00:15:14.280 ] 00:15:14.280 }, 00:15:14.280 { 00:15:14.280 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:14.280 "subtype": "NVMe", 00:15:14.280 "listen_addresses": [ 00:15:14.280 { 00:15:14.280 "trtype": "VFIOUSER", 00:15:14.280 "adrfam": "IPv4", 00:15:14.280 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:14.280 "trsvcid": "0" 00:15:14.280 } 00:15:14.280 ], 00:15:14.280 "allow_any_host": true, 00:15:14.280 "hosts": [], 00:15:14.280 "serial_number": "SPDK2", 00:15:14.280 "model_number": "SPDK bdev Controller", 00:15:14.280 "max_namespaces": 32, 00:15:14.280 "min_cntlid": 1, 00:15:14.280 "max_cntlid": 65519, 00:15:14.280 "namespaces": [ 00:15:14.280 { 00:15:14.280 "nsid": 1, 00:15:14.280 
"bdev_name": "Malloc2", 00:15:14.280 "name": "Malloc2", 00:15:14.280 "nguid": "8F83ABB9E42A4B62A9602E43EEB2CF1F", 00:15:14.280 "uuid": "8f83abb9-e42a-4b62-a960-2e43eeb2cf1f" 00:15:14.280 } 00:15:14.280 ] 00:15:14.280 } 00:15:14.280 ] 00:15:14.280 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:14.280 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=651161 00:15:14.280 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:14.280 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:14.280 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:14.280 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:14.280 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:14.280 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:14.280 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:14.280 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:14.541 [2024-11-20 09:00:39.806688] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.541 Malloc4 00:15:14.541 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:14.541 [2024-11-20 09:00:39.994909] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.541 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:14.541 Asynchronous Event Request test 00:15:14.541 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.541 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.541 Registering asynchronous event callbacks... 00:15:14.541 Starting namespace attribute notice tests for all controllers... 00:15:14.541 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:14.541 aer_cb - Changed Namespace 00:15:14.541 Cleaning up... 
00:15:14.803 [ 00:15:14.803 { 00:15:14.803 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:14.803 "subtype": "Discovery", 00:15:14.803 "listen_addresses": [], 00:15:14.803 "allow_any_host": true, 00:15:14.803 "hosts": [] 00:15:14.803 }, 00:15:14.803 { 00:15:14.803 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:14.803 "subtype": "NVMe", 00:15:14.803 "listen_addresses": [ 00:15:14.803 { 00:15:14.803 "trtype": "VFIOUSER", 00:15:14.803 "adrfam": "IPv4", 00:15:14.803 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:14.803 "trsvcid": "0" 00:15:14.803 } 00:15:14.803 ], 00:15:14.803 "allow_any_host": true, 00:15:14.803 "hosts": [], 00:15:14.803 "serial_number": "SPDK1", 00:15:14.803 "model_number": "SPDK bdev Controller", 00:15:14.803 "max_namespaces": 32, 00:15:14.803 "min_cntlid": 1, 00:15:14.803 "max_cntlid": 65519, 00:15:14.803 "namespaces": [ 00:15:14.803 { 00:15:14.803 "nsid": 1, 00:15:14.803 "bdev_name": "Malloc1", 00:15:14.803 "name": "Malloc1", 00:15:14.803 "nguid": "C90B3CB87AB949068C882C1ED8942939", 00:15:14.803 "uuid": "c90b3cb8-7ab9-4906-8c88-2c1ed8942939" 00:15:14.803 }, 00:15:14.803 { 00:15:14.803 "nsid": 2, 00:15:14.803 "bdev_name": "Malloc3", 00:15:14.803 "name": "Malloc3", 00:15:14.803 "nguid": "270B2A22A47A47E9A1E2FF0D3D8CD441", 00:15:14.803 "uuid": "270b2a22-a47a-47e9-a1e2-ff0d3d8cd441" 00:15:14.803 } 00:15:14.803 ] 00:15:14.803 }, 00:15:14.803 { 00:15:14.803 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:14.803 "subtype": "NVMe", 00:15:14.803 "listen_addresses": [ 00:15:14.803 { 00:15:14.803 "trtype": "VFIOUSER", 00:15:14.803 "adrfam": "IPv4", 00:15:14.803 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:14.803 "trsvcid": "0" 00:15:14.803 } 00:15:14.803 ], 00:15:14.803 "allow_any_host": true, 00:15:14.803 "hosts": [], 00:15:14.803 "serial_number": "SPDK2", 00:15:14.803 "model_number": "SPDK bdev Controller", 00:15:14.803 "max_namespaces": 32, 00:15:14.803 "min_cntlid": 1, 00:15:14.803 "max_cntlid": 65519, 00:15:14.803 "namespaces": [ 
00:15:14.803 { 00:15:14.803 "nsid": 1, 00:15:14.803 "bdev_name": "Malloc2", 00:15:14.803 "name": "Malloc2", 00:15:14.803 "nguid": "8F83ABB9E42A4B62A9602E43EEB2CF1F", 00:15:14.803 "uuid": "8f83abb9-e42a-4b62-a960-2e43eeb2cf1f" 00:15:14.803 }, 00:15:14.803 { 00:15:14.803 "nsid": 2, 00:15:14.803 "bdev_name": "Malloc4", 00:15:14.803 "name": "Malloc4", 00:15:14.803 "nguid": "6D9F0608275B4A9386FD971909C8E799", 00:15:14.803 "uuid": "6d9f0608-275b-4a93-86fd-971909c8e799" 00:15:14.803 } 00:15:14.803 ] 00:15:14.803 } 00:15:14.803 ] 00:15:14.803 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 651161 00:15:14.803 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:14.803 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 641524 00:15:14.803 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 641524 ']' 00:15:14.804 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 641524 00:15:14.804 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:14.804 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.804 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 641524 00:15:14.804 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.804 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.804 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 641524' 00:15:14.804 killing process with pid 641524 00:15:14.804 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 641524 00:15:14.804 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 641524 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=651364 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 651364' 00:15:15.065 Process pid: 651364 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 651364 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 651364 ']' 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.065 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.065 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:15.065 [2024-11-20 09:00:40.463086] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:15.065 [2024-11-20 09:00:40.464039] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:15:15.065 [2024-11-20 09:00:40.464085] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.065 [2024-11-20 09:00:40.548549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.065 [2024-11-20 09:00:40.583141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.065 [2024-11-20 09:00:40.583181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.065 [2024-11-20 09:00:40.583187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.065 [2024-11-20 09:00:40.583192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.065 [2024-11-20 09:00:40.583196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:15.065 [2024-11-20 09:00:40.584517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.065 [2024-11-20 09:00:40.584669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.065 [2024-11-20 09:00:40.584822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.065 [2024-11-20 09:00:40.584825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.326 [2024-11-20 09:00:40.637955] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:15.326 [2024-11-20 09:00:40.638820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:15.326 [2024-11-20 09:00:40.639890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:15.326 [2024-11-20 09:00:40.640067] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:15.326 [2024-11-20 09:00:40.640116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:15.898 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.898 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:15.898 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:16.841 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:17.100 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:17.100 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:17.100 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:17.100 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:17.100 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:17.360 Malloc1 00:15:17.360 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:17.360 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:17.621 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:17.881 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:17.881 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:17.881 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:18.143 Malloc2 00:15:18.143 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:18.143 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:18.403 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:18.664 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:18.664 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 651364 00:15:18.664 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 651364 ']' 00:15:18.664 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 651364 00:15:18.664 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:18.664 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.664 09:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 651364 00:15:18.664 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.664 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.664 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 651364' 00:15:18.664 killing process with pid 651364 00:15:18.664 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 651364 00:15:18.664 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 651364 00:15:18.664 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:18.664 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:18.664 00:15:18.664 real 0m50.946s 00:15:18.664 user 3m15.238s 00:15:18.664 sys 0m2.684s 00:15:18.664 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.664 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:18.664 ************************************ 00:15:18.664 END TEST nvmf_vfio_user 00:15:18.664 ************************************ 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.926 ************************************ 00:15:18.926 START TEST nvmf_vfio_user_nvme_compliance 00:15:18.926 ************************************ 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:18.926 * Looking for test storage... 00:15:18.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.926 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.927 09:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:18.927 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:19.188 09:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:19.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.188 --rc genhtml_branch_coverage=1 00:15:19.188 --rc genhtml_function_coverage=1 00:15:19.188 --rc genhtml_legend=1 00:15:19.188 --rc geninfo_all_blocks=1 00:15:19.188 --rc geninfo_unexecuted_blocks=1 00:15:19.188 00:15:19.188 ' 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:19.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.188 --rc genhtml_branch_coverage=1 00:15:19.188 --rc genhtml_function_coverage=1 00:15:19.188 --rc genhtml_legend=1 00:15:19.188 --rc geninfo_all_blocks=1 00:15:19.188 --rc geninfo_unexecuted_blocks=1 00:15:19.188 00:15:19.188 ' 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:19.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.188 --rc genhtml_branch_coverage=1 00:15:19.188 --rc genhtml_function_coverage=1 00:15:19.188 --rc 
genhtml_legend=1 00:15:19.188 --rc geninfo_all_blocks=1 00:15:19.188 --rc geninfo_unexecuted_blocks=1 00:15:19.188 00:15:19.188 ' 00:15:19.188 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:19.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.188 --rc genhtml_branch_coverage=1 00:15:19.189 --rc genhtml_function_coverage=1 00:15:19.189 --rc genhtml_legend=1 00:15:19.189 --rc geninfo_all_blocks=1 00:15:19.189 --rc geninfo_unexecuted_blocks=1 00:15:19.189 00:15:19.189 ' 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.189 09:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:19.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:19.189 09:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=652252 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 652252' 00:15:19.189 Process pid: 652252 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 652252 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 652252 ']' 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.189 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:19.189 [2024-11-20 09:00:44.561308] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:15:19.189 [2024-11-20 09:00:44.561359] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.189 [2024-11-20 09:00:44.643907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:19.189 [2024-11-20 09:00:44.674447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.189 [2024-11-20 09:00:44.674480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.189 [2024-11-20 09:00:44.674485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.189 [2024-11-20 09:00:44.674490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.189 [2024-11-20 09:00:44.674495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:19.189 [2024-11-20 09:00:44.675640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.189 [2024-11-20 09:00:44.675781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.189 [2024-11-20 09:00:44.675784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.132 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.132 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:20.132 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.077 09:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:21.077 malloc0 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:21.077 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:21.077 00:15:21.077 00:15:21.077 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.077 http://cunit.sourceforge.net/ 00:15:21.077 00:15:21.077 00:15:21.077 Suite: nvme_compliance 00:15:21.077 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 09:00:46.589548] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.077 [2024-11-20 09:00:46.590829] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:21.077 [2024-11-20 09:00:46.590841] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:21.077 [2024-11-20 09:00:46.590846] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:21.077 [2024-11-20 09:00:46.592571] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.339 passed 00:15:21.339 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 09:00:46.668077] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.339 [2024-11-20 09:00:46.671098] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.339 passed 00:15:21.339 Test: admin_identify_ns ...[2024-11-20 09:00:46.750525] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.339 [2024-11-20 09:00:46.811167] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:21.339 [2024-11-20 09:00:46.819164] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:21.339 [2024-11-20 09:00:46.840254] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:21.600 passed 00:15:21.600 Test: admin_get_features_mandatory_features ...[2024-11-20 09:00:46.914482] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.600 [2024-11-20 09:00:46.917505] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.600 passed 00:15:21.600 Test: admin_get_features_optional_features ...[2024-11-20 09:00:46.993946] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.600 [2024-11-20 09:00:46.996966] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.600 passed 00:15:21.600 Test: admin_set_features_number_of_queues ...[2024-11-20 09:00:47.072701] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.861 [2024-11-20 09:00:47.178251] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.861 passed 00:15:21.861 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 09:00:47.251505] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.861 [2024-11-20 09:00:47.254525] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.861 passed 00:15:21.861 Test: admin_get_log_page_with_lpo ...[2024-11-20 09:00:47.331276] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.122 [2024-11-20 09:00:47.401169] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:22.122 [2024-11-20 09:00:47.414206] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.122 passed 00:15:22.122 Test: fabric_property_get ...[2024-11-20 09:00:47.487438] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.122 [2024-11-20 09:00:47.488637] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:22.122 [2024-11-20 09:00:47.490462] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.122 passed 00:15:22.122 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 09:00:47.566925] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.122 [2024-11-20 09:00:47.568121] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:22.122 [2024-11-20 09:00:47.569945] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.122 passed 00:15:22.122 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 09:00:47.645829] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.382 [2024-11-20 09:00:47.730166] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:22.382 [2024-11-20 09:00:47.746165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:22.382 [2024-11-20 09:00:47.751237] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.382 passed 00:15:22.382 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 09:00:47.824462] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.382 [2024-11-20 09:00:47.825662] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:22.383 [2024-11-20 09:00:47.827476] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.383 passed 00:15:22.383 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 09:00:47.902521] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.643 [2024-11-20 09:00:47.982169] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:22.643 [2024-11-20 
09:00:48.006166] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:22.643 [2024-11-20 09:00:48.011232] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.643 passed 00:15:22.643 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 09:00:48.083465] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.643 [2024-11-20 09:00:48.084665] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:22.643 [2024-11-20 09:00:48.084684] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:22.643 [2024-11-20 09:00:48.087487] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.643 passed 00:15:22.643 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 09:00:48.162507] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.903 [2024-11-20 09:00:48.258164] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:22.903 [2024-11-20 09:00:48.266165] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:22.903 [2024-11-20 09:00:48.274165] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:22.903 [2024-11-20 09:00:48.282167] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:22.903 [2024-11-20 09:00:48.311230] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.903 passed 00:15:22.903 Test: admin_create_io_sq_verify_pc ...[2024-11-20 09:00:48.382429] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.903 [2024-11-20 09:00:48.401170] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:22.903 [2024-11-20 09:00:48.418591] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.164 passed 00:15:23.164 Test: admin_create_io_qp_max_qps ...[2024-11-20 09:00:48.494063] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.109 [2024-11-20 09:00:49.583166] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:24.681 [2024-11-20 09:00:49.967287] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.681 passed 00:15:24.681 Test: admin_create_io_sq_shared_cq ...[2024-11-20 09:00:50.042037] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.681 [2024-11-20 09:00:50.173163] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:24.943 [2024-11-20 09:00:50.210212] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.943 passed 00:15:24.943 00:15:24.943 Run Summary: Type Total Ran Passed Failed Inactive 00:15:24.943 suites 1 1 n/a 0 0 00:15:24.943 tests 18 18 18 0 0 00:15:24.943 asserts 360 360 360 0 n/a 00:15:24.943 00:15:24.943 Elapsed time = 1.486 seconds 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 652252 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 652252 ']' 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 652252 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 652252 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 652252' 00:15:24.943 killing process with pid 652252 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 652252 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 652252 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:24.943 00:15:24.943 real 0m6.174s 00:15:24.943 user 0m17.502s 00:15:24.943 sys 0m0.517s 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.943 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:24.943 ************************************ 00:15:24.943 END TEST nvmf_vfio_user_nvme_compliance 00:15:24.943 ************************************ 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:25.205 ************************************ 00:15:25.205 START TEST nvmf_vfio_user_fuzz 00:15:25.205 ************************************ 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:25.205 * Looking for test storage... 00:15:25.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.205 09:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.205 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:25.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.205 --rc genhtml_branch_coverage=1 00:15:25.205 --rc genhtml_function_coverage=1 00:15:25.205 --rc genhtml_legend=1 00:15:25.205 --rc geninfo_all_blocks=1 00:15:25.205 --rc geninfo_unexecuted_blocks=1 00:15:25.205 00:15:25.205 ' 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:25.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.206 --rc genhtml_branch_coverage=1 00:15:25.206 --rc genhtml_function_coverage=1 00:15:25.206 --rc genhtml_legend=1 00:15:25.206 --rc geninfo_all_blocks=1 00:15:25.206 --rc geninfo_unexecuted_blocks=1 00:15:25.206 00:15:25.206 ' 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:25.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.206 --rc genhtml_branch_coverage=1 00:15:25.206 --rc genhtml_function_coverage=1 00:15:25.206 --rc genhtml_legend=1 00:15:25.206 --rc geninfo_all_blocks=1 00:15:25.206 --rc geninfo_unexecuted_blocks=1 00:15:25.206 00:15:25.206 ' 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:25.206 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:25.206 --rc genhtml_branch_coverage=1 00:15:25.206 --rc genhtml_function_coverage=1 00:15:25.206 --rc genhtml_legend=1 00:15:25.206 --rc geninfo_all_blocks=1 00:15:25.206 --rc geninfo_unexecuted_blocks=1 00:15:25.206 00:15:25.206 ' 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.206 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.467 09:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:25.467 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:25.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=653496 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 653496' 00:15:25.468 Process pid: 653496 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 653496 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 653496 ']' 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.468 09:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.468 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:26.409 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.409 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:26.409 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:27.353 malloc0 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:27.353 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:59.463 Fuzzing completed. Shutting down the fuzz application 00:15:59.463 00:15:59.463 Dumping successful admin opcodes: 00:15:59.463 8, 9, 10, 24, 00:15:59.463 Dumping successful io opcodes: 00:15:59.463 0, 00:15:59.463 NS: 0x20000081ef00 I/O qp, Total commands completed: 1235872, total successful commands: 4850, random_seed: 1036159552 00:15:59.463 NS: 0x20000081ef00 admin qp, Total commands completed: 259940, total successful commands: 2093, random_seed: 3638216000 00:15:59.463 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:59.463 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.463 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:59.463 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.463 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 653496 00:15:59.464 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 653496 ']' 00:15:59.464 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 653496 00:15:59.464 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 653496 00:15:59.464 09:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 653496' 00:15:59.464 killing process with pid 653496 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 653496 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 653496 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:59.464 00:15:59.464 real 0m32.774s 00:15:59.464 user 0m34.951s 00:15:59.464 sys 0m26.023s 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:59.464 ************************************ 00:15:59.464 END TEST nvmf_vfio_user_fuzz 00:15:59.464 ************************************ 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:59.464 ************************************ 00:15:59.464 START TEST nvmf_auth_target 00:15:59.464 ************************************ 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:59.464 * Looking for test storage... 00:15:59.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:59.464 09:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:59.464 09:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:59.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.464 --rc genhtml_branch_coverage=1 00:15:59.464 --rc genhtml_function_coverage=1 00:15:59.464 --rc genhtml_legend=1 00:15:59.464 --rc geninfo_all_blocks=1 00:15:59.464 --rc geninfo_unexecuted_blocks=1 00:15:59.464 00:15:59.464 ' 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:59.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.464 --rc genhtml_branch_coverage=1 00:15:59.464 --rc genhtml_function_coverage=1 00:15:59.464 --rc genhtml_legend=1 00:15:59.464 --rc geninfo_all_blocks=1 00:15:59.464 --rc geninfo_unexecuted_blocks=1 00:15:59.464 00:15:59.464 ' 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:59.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.464 --rc genhtml_branch_coverage=1 00:15:59.464 --rc genhtml_function_coverage=1 00:15:59.464 --rc genhtml_legend=1 00:15:59.464 --rc geninfo_all_blocks=1 00:15:59.464 --rc geninfo_unexecuted_blocks=1 00:15:59.464 00:15:59.464 ' 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:59.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.464 --rc genhtml_branch_coverage=1 00:15:59.464 --rc genhtml_function_coverage=1 00:15:59.464 --rc genhtml_legend=1 00:15:59.464 
--rc geninfo_all_blocks=1 00:15:59.464 --rc geninfo_unexecuted_blocks=1 00:15:59.464 00:15:59.464 ' 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.464 
09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.464 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:59.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:59.465 09:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:59.465 09:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:59.465 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:06.053 09:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:06.053 09:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:06.053 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:06.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.053 
09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:06.053 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.053 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:06.054 
09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:06.054 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:06.054 09:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:06.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:16:06.054 00:16:06.054 --- 10.0.0.2 ping statistics --- 00:16:06.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.054 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:06.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:16:06.054 00:16:06.054 --- 10.0.0.1 ping statistics --- 00:16:06.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.054 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:06.054 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.054 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=663568 00:16:06.054 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 663568 00:16:06.054 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:06.054 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 663568 ']' 00:16:06.054 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.054 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.054 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:06.054 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.054 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=663663 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2a72d195f4447df2f8ff4adc91de6e13ff93f57ed18f3855 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Elk 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2a72d195f4447df2f8ff4adc91de6e13ff93f57ed18f3855 0 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2a72d195f4447df2f8ff4adc91de6e13ff93f57ed18f3855 0 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2a72d195f4447df2f8ff4adc91de6e13ff93f57ed18f3855 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Elk 00:16:06.627 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Elk 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Elk 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7bf74dcf7da916c7c25de07bb51cae7e76119b7bccff7353c4e877d17c45710d 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hAo 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7bf74dcf7da916c7c25de07bb51cae7e76119b7bccff7353c4e877d17c45710d 3 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7bf74dcf7da916c7c25de07bb51cae7e76119b7bccff7353c4e877d17c45710d 3 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7bf74dcf7da916c7c25de07bb51cae7e76119b7bccff7353c4e877d17c45710d 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hAo 00:16:06.627 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hAo 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.hAo 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fe3c0e9a292c16a0870f46beb354b9e3 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RWX 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fe3c0e9a292c16a0870f46beb354b9e3 1 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
fe3c0e9a292c16a0870f46beb354b9e3 1 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fe3c0e9a292c16a0870f46beb354b9e3 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RWX 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RWX 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.RWX 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:06.628 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7648453d132e576d7127133014948b3ca17a57dbe601c149 00:16:06.889 09:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.V7R 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7648453d132e576d7127133014948b3ca17a57dbe601c149 2 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7648453d132e576d7127133014948b3ca17a57dbe601c149 2 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7648453d132e576d7127133014948b3ca17a57dbe601c149 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.V7R 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.V7R 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.V7R 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0ea060016f483b63f907abce8e2fe3640625fc1c3788af6b 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ofo 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0ea060016f483b63f907abce8e2fe3640625fc1c3788af6b 2 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0ea060016f483b63f907abce8e2fe3640625fc1c3788af6b 2 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0ea060016f483b63f907abce8e2fe3640625fc1c3788af6b 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ofo 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ofo 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.ofo 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a8b7c4d08c900194acc8a8e4a85d8f94 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1vB 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a8b7c4d08c900194acc8a8e4a85d8f94 1 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a8b7c4d08c900194acc8a8e4a85d8f94 1 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a8b7c4d08c900194acc8a8e4a85d8f94 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1vB 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1vB 00:16:06.889 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.1vB 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9d4a3ead13a1c95940a8144e688770bf5cf0dbe075f09696ad6356d6c424d9af 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SoB 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9d4a3ead13a1c95940a8144e688770bf5cf0dbe075f09696ad6356d6c424d9af 3 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 9d4a3ead13a1c95940a8144e688770bf5cf0dbe075f09696ad6356d6c424d9af 3 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9d4a3ead13a1c95940a8144e688770bf5cf0dbe075f09696ad6356d6c424d9af 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:06.890 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:07.150 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SoB 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SoB 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.SoB 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 663568 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 663568 ']' 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
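The `gen_dhchap_key` traces above each end with an unechoed `python -` heredoc (`nvmf/common.sh@733`) that turns the raw hex string from `xxd` into a DHHC-1 secret. A minimal sketch of what that helper appears to do, following the NVMe-oF DH-HMAC-CHAP secret representation (append a little-endian CRC-32, base64-encode, wrap with prefix and digest id) — the function name here is invented, and the encoding details are assumptions cross-checked against the `DHHC-1:02:NzY0ODQ1…` secret that appears further down this trace:

```python
import base64
import zlib

def format_dhchap_key(key: str, digest_id: int, prefix: str = "DHHC-1") -> str:
    # Assumed encoding: a little-endian CRC-32 of the key bytes is appended,
    # the result is base64-encoded, and the whole thing is wrapped as
    #   PREFIX:<two-hex-digit digest id>:<base64>:
    # Note that the trace feeds the hex *string itself* in as the key bytes,
    # not the decoded binary.
    raw = key.encode("utf-8")
    crc = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + crc).decode("utf-8")
    return f"{prefix}:{digest_id:02x}:{b64}:"

# The 48-char sha384 key generated at nvmf/common.sh@755 above (digest id 2):
print(format_dhchap_key("7648453d132e576d7127133014948b3ca17a57dbe601c149", 2))
```

Under these assumptions the base64 payload begins `NzY0ODQ1M2Qx…`, which matches the `--dhchap-ctrl-secret DHHC-1:02:…` value later passed to `nvme connect` in this trace.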
00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 663663 /var/tmp/host.sock 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 663663 ']' 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:07.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
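Further down, each `connect_authenticate` pass validates the connection by piping `nvmf_subsystem_get_qpairs` output through `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state` at `target/auth.sh@75-77`). A minimal Python equivalent of those three checks, run here against a trimmed copy of the qpair record the trace prints (cntlid 1, sha256 / null / completed) — the JSON literal below is abridged from that output, not fetched live:

```python
import json

# Trimmed copy of the qpair record returned by nvmf_subsystem_get_qpairs
# later in this trace.
qpairs = json.loads("""
[
  {
    "cntlid": 1,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "null"
    }
  }
]
""")

# The same three checks target/auth.sh makes with jq:
auth = qpairs[0]["auth"]
assert auth["digest"] == "sha256"      # jq -r '.[0].auth.digest'
assert auth["dhgroup"] == "null"       # jq -r '.[0].auth.dhgroup'
assert auth["state"] == "completed"    # jq -r '.[0].auth.state'
print("auth negotiated:", auth)
```

The `"state": "completed"` field is what distinguishes a qpair that actually finished DH-HMAC-CHAP negotiation from one that merely connected.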
00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.151 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Elk 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Elk 00:16:07.413 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Elk 00:16:07.674 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.hAo ]] 00:16:07.674 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hAo 00:16:07.674 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.674 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.674 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.674 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hAo 00:16:07.674 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hAo 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RWX 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.RWX 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.RWX 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.V7R ]] 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.V7R 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.V7R 00:16:07.935 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.V7R 00:16:08.195 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:08.195 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ofo 00:16:08.195 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.195 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.195 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.195 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ofo 00:16:08.195 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ofo 00:16:08.456 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.1vB ]] 00:16:08.456 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1vB 00:16:08.456 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.456 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.456 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.456 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1vB 00:16:08.456 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1vB 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SoB 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.SoB 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.SoB 00:16:08.717 09:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:08.717 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:08.978 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:08.979 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.979 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.979 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:08.979 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.979 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.979 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.979 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.979 09:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.979 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.979 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.979 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.979 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.239 00:16:09.239 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.239 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.240 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.500 { 00:16:09.500 "cntlid": 1, 00:16:09.500 "qid": 0, 00:16:09.500 "state": "enabled", 00:16:09.500 "thread": "nvmf_tgt_poll_group_000", 00:16:09.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:09.500 "listen_address": { 00:16:09.500 "trtype": "TCP", 00:16:09.500 "adrfam": "IPv4", 00:16:09.500 "traddr": "10.0.0.2", 00:16:09.500 "trsvcid": "4420" 00:16:09.500 }, 00:16:09.500 "peer_address": { 00:16:09.500 "trtype": "TCP", 00:16:09.500 "adrfam": "IPv4", 00:16:09.500 "traddr": "10.0.0.1", 00:16:09.500 "trsvcid": "44276" 00:16:09.500 }, 00:16:09.500 "auth": { 00:16:09.500 "state": "completed", 00:16:09.500 "digest": "sha256", 00:16:09.500 "dhgroup": "null" 00:16:09.500 } 00:16:09.500 } 00:16:09.500 ]' 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.500 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.501 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.761 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:09.761 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:10.331 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.331 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:10.331 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.331 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.331 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.331 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.331 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:10.331 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.591 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.852 00:16:10.852 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.852 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.852 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.113 { 00:16:11.113 "cntlid": 3, 00:16:11.113 "qid": 0, 00:16:11.113 "state": "enabled", 00:16:11.113 "thread": "nvmf_tgt_poll_group_000", 00:16:11.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:11.113 "listen_address": { 00:16:11.113 "trtype": "TCP", 00:16:11.113 "adrfam": "IPv4", 00:16:11.113 
"traddr": "10.0.0.2", 00:16:11.113 "trsvcid": "4420" 00:16:11.113 }, 00:16:11.113 "peer_address": { 00:16:11.113 "trtype": "TCP", 00:16:11.113 "adrfam": "IPv4", 00:16:11.113 "traddr": "10.0.0.1", 00:16:11.113 "trsvcid": "57728" 00:16:11.113 }, 00:16:11.113 "auth": { 00:16:11.113 "state": "completed", 00:16:11.113 "digest": "sha256", 00:16:11.113 "dhgroup": "null" 00:16:11.113 } 00:16:11.113 } 00:16:11.113 ]' 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.113 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.374 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:11.374 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:11.944 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.944 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:11.944 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.944 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.944 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.944 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.944 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.944 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.205 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.465 00:16:12.465 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.466 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.466 
09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.728 { 00:16:12.728 "cntlid": 5, 00:16:12.728 "qid": 0, 00:16:12.728 "state": "enabled", 00:16:12.728 "thread": "nvmf_tgt_poll_group_000", 00:16:12.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:12.728 "listen_address": { 00:16:12.728 "trtype": "TCP", 00:16:12.728 "adrfam": "IPv4", 00:16:12.728 "traddr": "10.0.0.2", 00:16:12.728 "trsvcid": "4420" 00:16:12.728 }, 00:16:12.728 "peer_address": { 00:16:12.728 "trtype": "TCP", 00:16:12.728 "adrfam": "IPv4", 00:16:12.728 "traddr": "10.0.0.1", 00:16:12.728 "trsvcid": "57758" 00:16:12.728 }, 00:16:12.728 "auth": { 00:16:12.728 "state": "completed", 00:16:12.728 "digest": "sha256", 00:16:12.728 "dhgroup": "null" 00:16:12.728 } 00:16:12.728 } 00:16:12.728 ]' 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.728 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.988 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:12.988 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:13.573 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.573 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:13.573 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.573 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.573 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.573 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.573 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.573 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.834 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.098 00:16:14.098 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.098 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.098 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.098 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.099 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.099 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.099 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.359 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.359 
09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.359 { 00:16:14.359 "cntlid": 7, 00:16:14.359 "qid": 0, 00:16:14.359 "state": "enabled", 00:16:14.359 "thread": "nvmf_tgt_poll_group_000", 00:16:14.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:14.360 "listen_address": { 00:16:14.360 "trtype": "TCP", 00:16:14.360 "adrfam": "IPv4", 00:16:14.360 "traddr": "10.0.0.2", 00:16:14.360 "trsvcid": "4420" 00:16:14.360 }, 00:16:14.360 "peer_address": { 00:16:14.360 "trtype": "TCP", 00:16:14.360 "adrfam": "IPv4", 00:16:14.360 "traddr": "10.0.0.1", 00:16:14.360 "trsvcid": "57772" 00:16:14.360 }, 00:16:14.360 "auth": { 00:16:14.360 "state": "completed", 00:16:14.360 "digest": "sha256", 00:16:14.360 "dhgroup": "null" 00:16:14.360 } 00:16:14.360 } 00:16:14.360 ]' 00:16:14.360 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.360 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.360 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.360 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:14.360 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.360 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.360 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.360 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.620 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:14.620 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:15.192 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.192 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.192 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.192 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.192 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.192 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.192 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.192 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:15.192 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.453 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.714 00:16:15.714 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.714 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.714 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.714 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.714 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.714 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.714 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.714 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.714 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.714 { 00:16:15.714 "cntlid": 9, 00:16:15.714 "qid": 0, 00:16:15.714 "state": "enabled", 00:16:15.714 "thread": "nvmf_tgt_poll_group_000", 00:16:15.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:15.714 "listen_address": { 00:16:15.714 "trtype": "TCP", 00:16:15.714 "adrfam": "IPv4", 00:16:15.714 "traddr": "10.0.0.2", 00:16:15.714 "trsvcid": "4420" 00:16:15.714 }, 00:16:15.714 "peer_address": { 00:16:15.714 "trtype": "TCP", 00:16:15.714 "adrfam": "IPv4", 00:16:15.714 "traddr": "10.0.0.1", 00:16:15.714 "trsvcid": "57800" 00:16:15.714 
}, 00:16:15.714 "auth": { 00:16:15.714 "state": "completed", 00:16:15.714 "digest": "sha256", 00:16:15.714 "dhgroup": "ffdhe2048" 00:16:15.714 } 00:16:15.714 } 00:16:15.714 ]' 00:16:15.714 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.975 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.975 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.975 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.975 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.975 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.975 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.975 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.236 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:16.237 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret 
DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:16.809 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.809 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:16.809 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.809 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.809 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.809 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.809 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:16.809 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.070 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.331 00:16:17.331 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.331 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.331 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.331 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.592 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.592 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.593 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.593 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.593 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.593 { 00:16:17.593 "cntlid": 11, 00:16:17.593 "qid": 0, 00:16:17.593 "state": "enabled", 00:16:17.593 "thread": "nvmf_tgt_poll_group_000", 00:16:17.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:17.593 "listen_address": { 00:16:17.593 "trtype": "TCP", 00:16:17.593 "adrfam": "IPv4", 00:16:17.593 "traddr": "10.0.0.2", 00:16:17.593 "trsvcid": "4420" 00:16:17.593 }, 00:16:17.593 "peer_address": { 00:16:17.593 "trtype": "TCP", 00:16:17.593 "adrfam": "IPv4", 00:16:17.593 "traddr": "10.0.0.1", 00:16:17.593 "trsvcid": "57826" 00:16:17.593 }, 00:16:17.593 "auth": { 00:16:17.593 "state": "completed", 00:16:17.593 "digest": "sha256", 00:16:17.593 "dhgroup": "ffdhe2048" 00:16:17.593 } 00:16:17.593 } 00:16:17.593 ]' 00:16:17.593 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.593 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.593 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.593 09:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.593 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.593 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.593 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.593 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.854 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:17.854 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:18.424 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.424 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.424 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:18.424 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.424 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.424 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.424 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.424 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.685 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.946 00:16:18.946 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.946 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.946 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.946 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.946 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.946 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.946 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.946 09:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.946 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.946 { 00:16:18.946 "cntlid": 13, 00:16:18.946 "qid": 0, 00:16:18.946 "state": "enabled", 00:16:18.946 "thread": "nvmf_tgt_poll_group_000", 00:16:18.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:18.946 "listen_address": { 00:16:18.946 "trtype": "TCP", 00:16:18.946 "adrfam": "IPv4", 00:16:18.946 "traddr": "10.0.0.2", 00:16:18.946 "trsvcid": "4420" 00:16:18.946 }, 00:16:18.946 "peer_address": { 00:16:18.946 "trtype": "TCP", 00:16:18.946 "adrfam": "IPv4", 00:16:18.946 "traddr": "10.0.0.1", 00:16:18.946 "trsvcid": "57850" 00:16:18.946 }, 00:16:18.946 "auth": { 00:16:18.946 "state": "completed", 00:16:18.946 "digest": "sha256", 00:16:18.946 "dhgroup": "ffdhe2048" 00:16:18.946 } 00:16:18.946 } 00:16:18.946 ]' 00:16:19.206 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.206 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.206 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.206 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.206 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.206 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.206 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.207 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.467 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:19.468 09:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:20.040 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.040 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:20.040 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.040 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.040 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.040 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.040 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.040 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.301 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.301 00:16:20.561 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.561 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.561 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.561 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.561 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.561 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.561 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.561 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.561 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.561 { 00:16:20.561 "cntlid": 15, 00:16:20.561 "qid": 0, 00:16:20.561 "state": "enabled", 00:16:20.561 "thread": "nvmf_tgt_poll_group_000", 00:16:20.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:20.561 "listen_address": { 00:16:20.561 "trtype": "TCP", 00:16:20.561 "adrfam": "IPv4", 00:16:20.561 "traddr": "10.0.0.2", 00:16:20.561 "trsvcid": "4420" 00:16:20.561 }, 00:16:20.561 "peer_address": { 00:16:20.561 "trtype": "TCP", 00:16:20.561 "adrfam": "IPv4", 00:16:20.561 "traddr": "10.0.0.1", 
00:16:20.561 "trsvcid": "43292" 00:16:20.561 }, 00:16:20.561 "auth": { 00:16:20.561 "state": "completed", 00:16:20.561 "digest": "sha256", 00:16:20.561 "dhgroup": "ffdhe2048" 00:16:20.561 } 00:16:20.561 } 00:16:20.561 ]' 00:16:20.561 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.561 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.822 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.822 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.822 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.822 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.822 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.822 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.083 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:21.083 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:21.654 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.654 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.654 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.654 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.654 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.654 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.654 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.654 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.654 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.915 09:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.915 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.915 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.182 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.182 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.182 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.182 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.182 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.182 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.182 { 00:16:22.182 "cntlid": 17, 00:16:22.182 "qid": 0, 00:16:22.182 "state": "enabled", 00:16:22.182 "thread": "nvmf_tgt_poll_group_000", 00:16:22.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:22.182 "listen_address": { 00:16:22.182 "trtype": "TCP", 00:16:22.182 "adrfam": "IPv4", 00:16:22.182 "traddr": "10.0.0.2", 00:16:22.182 "trsvcid": "4420" 00:16:22.182 }, 00:16:22.182 "peer_address": { 00:16:22.182 "trtype": "TCP", 00:16:22.182 "adrfam": "IPv4", 00:16:22.182 "traddr": "10.0.0.1", 00:16:22.182 "trsvcid": "43310" 00:16:22.182 }, 00:16:22.182 "auth": { 00:16:22.182 "state": "completed", 00:16:22.182 "digest": "sha256", 00:16:22.182 "dhgroup": "ffdhe3072" 00:16:22.182 } 00:16:22.182 } 00:16:22.182 ]' 00:16:22.182 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.182 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.182 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.499 09:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.499 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.499 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.499 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.499 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.499 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:22.499 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:23.128 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.128 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.128 09:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.128 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.128 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.128 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.128 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.128 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.394 09:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.394 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.654 00:16:23.654 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.654 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.654 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.913 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.913 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.913 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.913 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.913 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.913 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.913 { 00:16:23.914 "cntlid": 19, 00:16:23.914 "qid": 0, 00:16:23.914 "state": "enabled", 00:16:23.914 "thread": "nvmf_tgt_poll_group_000", 00:16:23.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:23.914 "listen_address": { 00:16:23.914 "trtype": "TCP", 00:16:23.914 "adrfam": "IPv4", 00:16:23.914 "traddr": "10.0.0.2", 00:16:23.914 "trsvcid": "4420" 00:16:23.914 }, 00:16:23.914 "peer_address": { 00:16:23.914 "trtype": "TCP", 00:16:23.914 "adrfam": "IPv4", 00:16:23.914 "traddr": "10.0.0.1", 00:16:23.914 "trsvcid": "43356" 00:16:23.914 }, 00:16:23.914 "auth": { 00:16:23.914 "state": "completed", 00:16:23.914 "digest": "sha256", 00:16:23.914 "dhgroup": "ffdhe3072" 00:16:23.914 } 00:16:23.914 } 00:16:23.914 ]' 00:16:23.914 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.914 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.914 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.914 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.914 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.914 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.914 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.914 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.174 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:24.174 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:24.747 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.747 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.747 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.747 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.747 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.747 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.747 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.747 09:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.007 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:25.007 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.007 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.007 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:25.007 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.007 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.007 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.008 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.008 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.008 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.008 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.008 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.008 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.268 00:16:25.268 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.268 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.268 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.529 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.529 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.529 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.529 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.529 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.529 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.529 { 00:16:25.529 "cntlid": 21, 00:16:25.529 "qid": 0, 00:16:25.529 "state": "enabled", 00:16:25.529 "thread": "nvmf_tgt_poll_group_000", 00:16:25.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:25.529 "listen_address": { 00:16:25.529 "trtype": "TCP", 00:16:25.529 "adrfam": "IPv4", 00:16:25.529 "traddr": "10.0.0.2", 00:16:25.529 
"trsvcid": "4420" 00:16:25.529 }, 00:16:25.529 "peer_address": { 00:16:25.529 "trtype": "TCP", 00:16:25.529 "adrfam": "IPv4", 00:16:25.529 "traddr": "10.0.0.1", 00:16:25.529 "trsvcid": "43382" 00:16:25.529 }, 00:16:25.529 "auth": { 00:16:25.529 "state": "completed", 00:16:25.529 "digest": "sha256", 00:16:25.529 "dhgroup": "ffdhe3072" 00:16:25.529 } 00:16:25.529 } 00:16:25.529 ]' 00:16:25.529 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.529 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.529 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.529 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.529 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.529 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.529 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.529 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.790 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:25.790 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:26.361 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.621 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.621 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.621 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.621 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.621 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.621 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.621 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.621 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.880 00:16:26.880 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.880 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.880 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.139 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.139 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.139 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.139 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.139 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.139 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.139 { 00:16:27.139 "cntlid": 23, 00:16:27.139 "qid": 0, 00:16:27.139 "state": "enabled", 00:16:27.139 "thread": "nvmf_tgt_poll_group_000", 00:16:27.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:27.139 "listen_address": { 00:16:27.139 "trtype": "TCP", 00:16:27.139 "adrfam": "IPv4", 00:16:27.140 "traddr": "10.0.0.2", 00:16:27.140 "trsvcid": "4420" 00:16:27.140 }, 00:16:27.140 "peer_address": { 00:16:27.140 "trtype": "TCP", 00:16:27.140 "adrfam": "IPv4", 00:16:27.140 "traddr": "10.0.0.1", 00:16:27.140 "trsvcid": "43416" 00:16:27.140 }, 00:16:27.140 "auth": { 00:16:27.140 "state": "completed", 00:16:27.140 "digest": "sha256", 00:16:27.140 "dhgroup": "ffdhe3072" 00:16:27.140 } 00:16:27.140 } 00:16:27.140 ]' 00:16:27.140 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.140 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.140 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.140 09:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.140 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.140 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.140 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.140 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.400 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:27.400 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:27.970 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.231 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.492 00:16:28.492 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.492 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.492 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.752 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.752 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.752 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.752 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.752 09:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.752 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.752 { 00:16:28.752 "cntlid": 25, 00:16:28.752 "qid": 0, 00:16:28.752 "state": "enabled", 00:16:28.752 "thread": "nvmf_tgt_poll_group_000", 00:16:28.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:28.752 "listen_address": { 00:16:28.752 "trtype": "TCP", 00:16:28.752 "adrfam": "IPv4", 00:16:28.752 "traddr": "10.0.0.2", 00:16:28.752 "trsvcid": "4420" 00:16:28.752 }, 00:16:28.752 "peer_address": { 00:16:28.752 "trtype": "TCP", 00:16:28.752 "adrfam": "IPv4", 00:16:28.752 "traddr": "10.0.0.1", 00:16:28.752 "trsvcid": "43446" 00:16:28.752 }, 00:16:28.752 "auth": { 00:16:28.752 "state": "completed", 00:16:28.752 "digest": "sha256", 00:16:28.752 "dhgroup": "ffdhe4096" 00:16:28.752 } 00:16:28.752 } 00:16:28.752 ]' 00:16:28.752 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.752 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.752 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.752 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.752 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.013 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.013 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.013 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.013 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:29.013 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.955 09:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.955 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.215 00:16:30.215 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.215 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.215 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.476 { 00:16:30.476 "cntlid": 27, 00:16:30.476 "qid": 0, 00:16:30.476 "state": "enabled", 00:16:30.476 "thread": "nvmf_tgt_poll_group_000", 00:16:30.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:30.476 "listen_address": { 00:16:30.476 "trtype": "TCP", 00:16:30.476 "adrfam": "IPv4", 00:16:30.476 "traddr": "10.0.0.2", 00:16:30.476 
"trsvcid": "4420" 00:16:30.476 }, 00:16:30.476 "peer_address": { 00:16:30.476 "trtype": "TCP", 00:16:30.476 "adrfam": "IPv4", 00:16:30.476 "traddr": "10.0.0.1", 00:16:30.476 "trsvcid": "37274" 00:16:30.476 }, 00:16:30.476 "auth": { 00:16:30.476 "state": "completed", 00:16:30.476 "digest": "sha256", 00:16:30.476 "dhgroup": "ffdhe4096" 00:16:30.476 } 00:16:30.476 } 00:16:30.476 ]' 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.476 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.737 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:30.737 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:31.308 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.308 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.308 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.308 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.308 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.308 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.308 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.308 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.568 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.828 00:16:31.828 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.828 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.828 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.088 { 00:16:32.088 "cntlid": 29, 00:16:32.088 "qid": 0, 00:16:32.088 "state": "enabled", 00:16:32.088 "thread": "nvmf_tgt_poll_group_000", 00:16:32.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:32.088 "listen_address": { 00:16:32.088 "trtype": "TCP", 00:16:32.088 "adrfam": "IPv4", 00:16:32.088 "traddr": "10.0.0.2", 00:16:32.088 "trsvcid": "4420" 00:16:32.088 }, 00:16:32.088 "peer_address": { 00:16:32.088 "trtype": "TCP", 00:16:32.088 "adrfam": "IPv4", 00:16:32.088 "traddr": "10.0.0.1", 00:16:32.088 "trsvcid": "37292" 00:16:32.088 }, 00:16:32.088 "auth": { 00:16:32.088 "state": "completed", 00:16:32.088 "digest": "sha256", 00:16:32.088 "dhgroup": "ffdhe4096" 00:16:32.088 } 00:16:32.088 } 00:16:32.088 ]' 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.088 09:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.088 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.348 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:32.348 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:32.919 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.919 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.919 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.919 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.919 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.919 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.919 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:32.919 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.180 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.441 00:16:33.441 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.441 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.441 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.701 { 00:16:33.701 "cntlid": 31, 00:16:33.701 "qid": 0, 00:16:33.701 "state": "enabled", 00:16:33.701 "thread": "nvmf_tgt_poll_group_000", 00:16:33.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:33.701 "listen_address": { 00:16:33.701 "trtype": "TCP", 00:16:33.701 "adrfam": "IPv4", 00:16:33.701 "traddr": "10.0.0.2", 00:16:33.701 "trsvcid": "4420" 00:16:33.701 }, 00:16:33.701 "peer_address": { 00:16:33.701 "trtype": "TCP", 00:16:33.701 "adrfam": "IPv4", 00:16:33.701 "traddr": "10.0.0.1", 00:16:33.701 "trsvcid": "37312" 00:16:33.701 }, 00:16:33.701 "auth": { 00:16:33.701 "state": "completed", 00:16:33.701 "digest": "sha256", 00:16:33.701 "dhgroup": "ffdhe4096" 00:16:33.701 } 00:16:33.701 } 00:16:33.701 ]' 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.701 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.962 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:33.962 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:34.534 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.534 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.534 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.534 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.534 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.534 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.534 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.534 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.534 09:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.795 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.056 00:16:35.056 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.056 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.057 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.318 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.318 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.318 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.318 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.318 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.318 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.318 { 00:16:35.318 "cntlid": 33, 00:16:35.318 "qid": 0, 00:16:35.318 "state": "enabled", 00:16:35.318 "thread": "nvmf_tgt_poll_group_000", 00:16:35.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:35.318 "listen_address": { 00:16:35.318 "trtype": "TCP", 00:16:35.318 "adrfam": "IPv4", 00:16:35.318 "traddr": "10.0.0.2", 00:16:35.318 
"trsvcid": "4420" 00:16:35.318 }, 00:16:35.318 "peer_address": { 00:16:35.318 "trtype": "TCP", 00:16:35.318 "adrfam": "IPv4", 00:16:35.318 "traddr": "10.0.0.1", 00:16:35.318 "trsvcid": "37342" 00:16:35.318 }, 00:16:35.318 "auth": { 00:16:35.318 "state": "completed", 00:16:35.318 "digest": "sha256", 00:16:35.318 "dhgroup": "ffdhe6144" 00:16:35.318 } 00:16:35.318 } 00:16:35.318 ]' 00:16:35.318 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.318 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.318 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.579 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.579 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.579 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.579 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.579 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.579 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:35.579 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.520 09:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.520 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.521 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.521 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.521 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.782 00:16:36.782 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.782 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.782 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.043 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.043 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.043 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.043 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.043 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.043 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.043 { 00:16:37.043 "cntlid": 35, 00:16:37.043 "qid": 0, 00:16:37.043 "state": "enabled", 00:16:37.043 "thread": "nvmf_tgt_poll_group_000", 00:16:37.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:37.043 "listen_address": { 00:16:37.043 "trtype": "TCP", 00:16:37.043 "adrfam": "IPv4", 00:16:37.043 "traddr": "10.0.0.2", 00:16:37.043 "trsvcid": "4420" 00:16:37.043 }, 00:16:37.043 "peer_address": { 00:16:37.043 "trtype": "TCP", 00:16:37.043 "adrfam": "IPv4", 00:16:37.043 "traddr": "10.0.0.1", 00:16:37.043 "trsvcid": "37376" 00:16:37.043 }, 00:16:37.043 "auth": { 00:16:37.043 "state": "completed", 00:16:37.043 "digest": "sha256", 00:16:37.043 "dhgroup": "ffdhe6144" 00:16:37.043 } 00:16:37.043 } 00:16:37.043 ]' 00:16:37.043 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.043 09:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.043 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.303 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.304 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.304 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.304 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.304 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.304 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:37.304 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.244 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.505 00:16:38.505 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.505 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.505 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.767 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.767 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.767 09:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.767 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.767 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.767 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.767 { 00:16:38.767 "cntlid": 37, 00:16:38.767 "qid": 0, 00:16:38.767 "state": "enabled", 00:16:38.767 "thread": "nvmf_tgt_poll_group_000", 00:16:38.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.767 "listen_address": { 00:16:38.767 "trtype": "TCP", 00:16:38.767 "adrfam": "IPv4", 00:16:38.767 "traddr": "10.0.0.2", 00:16:38.767 "trsvcid": "4420" 00:16:38.767 }, 00:16:38.767 "peer_address": { 00:16:38.767 "trtype": "TCP", 00:16:38.767 "adrfam": "IPv4", 00:16:38.767 "traddr": "10.0.0.1", 00:16:38.767 "trsvcid": "37406" 00:16:38.767 }, 00:16:38.767 "auth": { 00:16:38.767 "state": "completed", 00:16:38.767 "digest": "sha256", 00:16:38.767 "dhgroup": "ffdhe6144" 00:16:38.767 } 00:16:38.767 } 00:16:38.767 ]' 00:16:38.767 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.767 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.767 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.767 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.767 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.027 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.027 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.027 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.027 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:39.027 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.969 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.230 00:16:40.230 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.230 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.230 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.491 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.491 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.491 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.491 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.491 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.491 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.491 { 00:16:40.491 "cntlid": 39, 00:16:40.491 "qid": 0, 00:16:40.491 "state": "enabled", 00:16:40.491 "thread": "nvmf_tgt_poll_group_000", 00:16:40.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:40.491 "listen_address": { 00:16:40.491 "trtype": "TCP", 00:16:40.491 "adrfam": 
"IPv4", 00:16:40.491 "traddr": "10.0.0.2", 00:16:40.491 "trsvcid": "4420" 00:16:40.491 }, 00:16:40.491 "peer_address": { 00:16:40.491 "trtype": "TCP", 00:16:40.491 "adrfam": "IPv4", 00:16:40.491 "traddr": "10.0.0.1", 00:16:40.491 "trsvcid": "46898" 00:16:40.491 }, 00:16:40.491 "auth": { 00:16:40.491 "state": "completed", 00:16:40.491 "digest": "sha256", 00:16:40.491 "dhgroup": "ffdhe6144" 00:16:40.491 } 00:16:40.491 } 00:16:40.491 ]' 00:16:40.491 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.491 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.491 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.491 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.491 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.751 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.751 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.752 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.752 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:40.752 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:41.693 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.693 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.693 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.693 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.693 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.693 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.693 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.693 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.693 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.693 
09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.693 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.266 00:16:42.266 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.266 09:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.266 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.266 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.266 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.266 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.266 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.266 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.266 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.266 { 00:16:42.266 "cntlid": 41, 00:16:42.266 "qid": 0, 00:16:42.266 "state": "enabled", 00:16:42.266 "thread": "nvmf_tgt_poll_group_000", 00:16:42.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:42.266 "listen_address": { 00:16:42.266 "trtype": "TCP", 00:16:42.266 "adrfam": "IPv4", 00:16:42.266 "traddr": "10.0.0.2", 00:16:42.266 "trsvcid": "4420" 00:16:42.266 }, 00:16:42.266 "peer_address": { 00:16:42.266 "trtype": "TCP", 00:16:42.266 "adrfam": "IPv4", 00:16:42.266 "traddr": "10.0.0.1", 00:16:42.266 "trsvcid": "46932" 00:16:42.266 }, 00:16:42.266 "auth": { 00:16:42.266 "state": "completed", 00:16:42.266 "digest": "sha256", 00:16:42.266 "dhgroup": "ffdhe8192" 00:16:42.266 } 00:16:42.266 } 00:16:42.266 ]' 00:16:42.266 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.527 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:42.527 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.527 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.527 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.527 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.527 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.527 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.789 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:42.789 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:43.361 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.361 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.361 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.361 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.361 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.361 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.361 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.361 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.622 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.191 00:16:44.191 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.191 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.191 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.191 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.191 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.191 09:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.191 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.191 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.191 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.191 { 00:16:44.191 "cntlid": 43, 00:16:44.191 "qid": 0, 00:16:44.191 "state": "enabled", 00:16:44.191 "thread": "nvmf_tgt_poll_group_000", 00:16:44.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.191 "listen_address": { 00:16:44.191 "trtype": "TCP", 00:16:44.191 "adrfam": "IPv4", 00:16:44.191 "traddr": "10.0.0.2", 00:16:44.191 "trsvcid": "4420" 00:16:44.191 }, 00:16:44.191 "peer_address": { 00:16:44.191 "trtype": "TCP", 00:16:44.191 "adrfam": "IPv4", 00:16:44.191 "traddr": "10.0.0.1", 00:16:44.191 "trsvcid": "46956" 00:16:44.191 }, 00:16:44.191 "auth": { 00:16:44.192 "state": "completed", 00:16:44.192 "digest": "sha256", 00:16:44.192 "dhgroup": "ffdhe8192" 00:16:44.192 } 00:16:44.192 } 00:16:44.192 ]' 00:16:44.192 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.192 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.192 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.452 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.452 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.452 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.452 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.452 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.452 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:44.453 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.395 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.967 00:16:45.967 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.967 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.967 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.967 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.967 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.967 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.967 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.967 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.967 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.967 { 00:16:45.967 "cntlid": 45, 00:16:45.967 "qid": 0, 00:16:45.967 "state": "enabled", 00:16:45.967 "thread": "nvmf_tgt_poll_group_000", 00:16:45.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:45.967 
"listen_address": { 00:16:45.967 "trtype": "TCP", 00:16:45.967 "adrfam": "IPv4", 00:16:45.967 "traddr": "10.0.0.2", 00:16:45.967 "trsvcid": "4420" 00:16:45.967 }, 00:16:45.967 "peer_address": { 00:16:45.967 "trtype": "TCP", 00:16:45.967 "adrfam": "IPv4", 00:16:45.967 "traddr": "10.0.0.1", 00:16:45.967 "trsvcid": "46984" 00:16:45.967 }, 00:16:45.967 "auth": { 00:16:45.967 "state": "completed", 00:16:45.967 "digest": "sha256", 00:16:45.967 "dhgroup": "ffdhe8192" 00:16:45.967 } 00:16:45.967 } 00:16:45.967 ]' 00:16:45.967 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.227 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.227 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.227 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.227 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.227 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.227 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.227 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.487 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:46.487 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:47.058 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.058 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.058 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.058 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.058 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.058 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.058 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.058 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.318 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.578 00:16:47.839 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.839 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:47.839 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.839 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.839 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.839 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.839 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.839 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.839 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.839 { 00:16:47.839 "cntlid": 47, 00:16:47.839 "qid": 0, 00:16:47.839 "state": "enabled", 00:16:47.839 "thread": "nvmf_tgt_poll_group_000", 00:16:47.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.839 "listen_address": { 00:16:47.839 "trtype": "TCP", 00:16:47.839 "adrfam": "IPv4", 00:16:47.839 "traddr": "10.0.0.2", 00:16:47.839 "trsvcid": "4420" 00:16:47.839 }, 00:16:47.839 "peer_address": { 00:16:47.839 "trtype": "TCP", 00:16:47.839 "adrfam": "IPv4", 00:16:47.839 "traddr": "10.0.0.1", 00:16:47.839 "trsvcid": "47006" 00:16:47.839 }, 00:16:47.839 "auth": { 00:16:47.839 "state": "completed", 00:16:47.839 "digest": "sha256", 00:16:47.839 "dhgroup": "ffdhe8192" 00:16:47.839 } 00:16:47.839 } 00:16:47.839 ]' 00:16:47.839 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.839 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.839 09:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.103 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.103 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.103 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.103 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.103 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.103 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:48.103 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.044 
09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.044 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.045 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.305 00:16:49.305 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.305 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.305 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.566 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.566 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.566 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.566 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.566 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.566 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.566 { 00:16:49.566 "cntlid": 49, 00:16:49.566 "qid": 0, 00:16:49.566 "state": "enabled", 00:16:49.566 "thread": "nvmf_tgt_poll_group_000", 00:16:49.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.566 "listen_address": { 00:16:49.566 "trtype": "TCP", 00:16:49.566 "adrfam": "IPv4", 00:16:49.566 "traddr": "10.0.0.2", 00:16:49.566 "trsvcid": "4420" 00:16:49.566 }, 00:16:49.566 "peer_address": { 00:16:49.566 "trtype": "TCP", 00:16:49.566 "adrfam": "IPv4", 00:16:49.566 "traddr": "10.0.0.1", 00:16:49.566 "trsvcid": "47034" 00:16:49.566 }, 00:16:49.566 "auth": { 00:16:49.566 "state": "completed", 00:16:49.566 "digest": "sha384", 00:16:49.566 "dhgroup": "null" 00:16:49.566 } 00:16:49.566 } 00:16:49.566 ]' 00:16:49.566 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.566 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.566 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.566 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:49.566 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.566 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.566 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:49.566 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.827 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:49.827 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:50.397 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.397 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.397 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.397 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.397 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.397 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.397 09:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:50.397 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.658 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.918 00:16:50.918 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.918 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.918 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.179 { 00:16:51.179 "cntlid": 51, 00:16:51.179 "qid": 0, 00:16:51.179 "state": "enabled", 00:16:51.179 "thread": "nvmf_tgt_poll_group_000", 00:16:51.179 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:51.179 "listen_address": { 00:16:51.179 "trtype": "TCP", 00:16:51.179 "adrfam": "IPv4", 00:16:51.179 "traddr": "10.0.0.2", 00:16:51.179 "trsvcid": "4420" 00:16:51.179 }, 00:16:51.179 "peer_address": { 00:16:51.179 "trtype": "TCP", 00:16:51.179 "adrfam": "IPv4", 00:16:51.179 "traddr": "10.0.0.1", 00:16:51.179 "trsvcid": "37296" 00:16:51.179 }, 00:16:51.179 "auth": { 00:16:51.179 "state": "completed", 00:16:51.179 "digest": "sha384", 00:16:51.179 "dhgroup": "null" 00:16:51.179 } 00:16:51.179 } 00:16:51.179 ]' 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.179 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.439 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:51.439 09:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:52.009 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.009 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.009 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.009 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.269 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.529 00:16:52.529 09:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.529 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.529 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.789 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.789 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.789 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.789 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.790 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.790 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.790 { 00:16:52.790 "cntlid": 53, 00:16:52.790 "qid": 0, 00:16:52.790 "state": "enabled", 00:16:52.790 "thread": "nvmf_tgt_poll_group_000", 00:16:52.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.790 "listen_address": { 00:16:52.790 "trtype": "TCP", 00:16:52.790 "adrfam": "IPv4", 00:16:52.790 "traddr": "10.0.0.2", 00:16:52.790 "trsvcid": "4420" 00:16:52.790 }, 00:16:52.790 "peer_address": { 00:16:52.790 "trtype": "TCP", 00:16:52.790 "adrfam": "IPv4", 00:16:52.790 "traddr": "10.0.0.1", 00:16:52.790 "trsvcid": "37330" 00:16:52.790 }, 00:16:52.790 "auth": { 00:16:52.790 "state": "completed", 00:16:52.790 "digest": "sha384", 00:16:52.790 "dhgroup": "null" 00:16:52.790 } 00:16:52.790 } 00:16:52.790 ]' 00:16:52.790 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:52.790 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.790 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.790 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:52.790 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.790 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.790 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.790 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.050 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:53.050 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:53.991 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.991 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.991 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.991 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.991 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.991 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.991 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:53.991 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:53.991 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:53.991 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.991 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.992 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:53.992 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.992 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.992 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:53.992 
09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.992 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.992 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.992 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.992 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.992 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.252 00:16:54.252 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.252 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.252 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.252 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.512 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.512 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.512 09:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.512 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.512 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.512 { 00:16:54.512 "cntlid": 55, 00:16:54.512 "qid": 0, 00:16:54.512 "state": "enabled", 00:16:54.512 "thread": "nvmf_tgt_poll_group_000", 00:16:54.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.512 "listen_address": { 00:16:54.512 "trtype": "TCP", 00:16:54.512 "adrfam": "IPv4", 00:16:54.512 "traddr": "10.0.0.2", 00:16:54.513 "trsvcid": "4420" 00:16:54.513 }, 00:16:54.513 "peer_address": { 00:16:54.513 "trtype": "TCP", 00:16:54.513 "adrfam": "IPv4", 00:16:54.513 "traddr": "10.0.0.1", 00:16:54.513 "trsvcid": "37356" 00:16:54.513 }, 00:16:54.513 "auth": { 00:16:54.513 "state": "completed", 00:16:54.513 "digest": "sha384", 00:16:54.513 "dhgroup": "null" 00:16:54.513 } 00:16:54.513 } 00:16:54.513 ]' 00:16:54.513 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.513 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.513 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.513 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:54.513 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.513 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.513 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.513 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.774 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:54.774 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:16:55.346 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.346 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.346 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.346 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.346 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.346 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.346 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.346 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.346 09:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.607 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.868 00:16:55.868 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.868 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.868 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.868 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.868 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.868 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.868 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.868 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.868 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.868 { 00:16:55.868 "cntlid": 57, 00:16:55.868 "qid": 0, 00:16:55.868 "state": "enabled", 00:16:55.868 "thread": "nvmf_tgt_poll_group_000", 00:16:55.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.868 "listen_address": { 00:16:55.868 "trtype": "TCP", 00:16:55.868 "adrfam": "IPv4", 00:16:55.868 "traddr": "10.0.0.2", 00:16:55.868 
"trsvcid": "4420" 00:16:55.868 }, 00:16:55.868 "peer_address": { 00:16:55.868 "trtype": "TCP", 00:16:55.868 "adrfam": "IPv4", 00:16:55.868 "traddr": "10.0.0.1", 00:16:55.868 "trsvcid": "37392" 00:16:55.868 }, 00:16:55.868 "auth": { 00:16:55.868 "state": "completed", 00:16:55.868 "digest": "sha384", 00:16:55.868 "dhgroup": "ffdhe2048" 00:16:55.868 } 00:16:55.868 } 00:16:55.868 ]' 00:16:55.868 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.128 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.128 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.128 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.128 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.128 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.128 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.128 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.388 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:56.388 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:16:56.959 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.959 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.959 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.959 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.959 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.959 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.959 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:56.959 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.220 09:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.220 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.481 00:16:57.481 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.481 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.481 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.481 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.481 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.481 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.481 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.481 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.481 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.481 { 00:16:57.481 "cntlid": 59, 00:16:57.481 "qid": 0, 00:16:57.481 "state": "enabled", 00:16:57.481 "thread": "nvmf_tgt_poll_group_000", 00:16:57.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.481 "listen_address": { 00:16:57.481 "trtype": "TCP", 00:16:57.481 "adrfam": "IPv4", 00:16:57.481 "traddr": "10.0.0.2", 00:16:57.481 "trsvcid": "4420" 00:16:57.481 }, 00:16:57.481 "peer_address": { 00:16:57.481 "trtype": "TCP", 00:16:57.481 "adrfam": "IPv4", 00:16:57.481 "traddr": "10.0.0.1", 00:16:57.481 "trsvcid": "37428" 00:16:57.481 }, 00:16:57.481 "auth": { 00:16:57.481 "state": "completed", 00:16:57.481 "digest": "sha384", 00:16:57.481 "dhgroup": "ffdhe2048" 00:16:57.481 } 00:16:57.481 } 00:16:57.481 ]' 00:16:57.481 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.481 09:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.741 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.741 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.741 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.741 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.741 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.741 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.741 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:57.741 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:16:58.682 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.682 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.682 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.682 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.682 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.682 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.682 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.682 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.682 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:58.682 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.682 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.682 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:58.682 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.682 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.682 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:58.682 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.682 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.682 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.683 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.683 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.683 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.944 00:16:58.944 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.944 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.944 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.205 09:02:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.205 { 00:16:59.205 "cntlid": 61, 00:16:59.205 "qid": 0, 00:16:59.205 "state": "enabled", 00:16:59.205 "thread": "nvmf_tgt_poll_group_000", 00:16:59.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.205 "listen_address": { 00:16:59.205 "trtype": "TCP", 00:16:59.205 "adrfam": "IPv4", 00:16:59.205 "traddr": "10.0.0.2", 00:16:59.205 "trsvcid": "4420" 00:16:59.205 }, 00:16:59.205 "peer_address": { 00:16:59.205 "trtype": "TCP", 00:16:59.205 "adrfam": "IPv4", 00:16:59.205 "traddr": "10.0.0.1", 00:16:59.205 "trsvcid": "37452" 00:16:59.205 }, 00:16:59.205 "auth": { 00:16:59.205 "state": "completed", 00:16:59.205 "digest": "sha384", 00:16:59.205 "dhgroup": "ffdhe2048" 00:16:59.205 } 00:16:59.205 } 00:16:59.205 ]' 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.205 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.465 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:16:59.465 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:00.038 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.038 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.038 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.038 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.038 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.038 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.038 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.038 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.299 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.585 00:17:00.585 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.585 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.585 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.585 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.585 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.585 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.585 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.585 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.585 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.585 { 00:17:00.585 "cntlid": 63, 00:17:00.585 "qid": 0, 00:17:00.585 "state": "enabled", 00:17:00.585 "thread": "nvmf_tgt_poll_group_000", 00:17:00.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.585 "listen_address": { 00:17:00.585 "trtype": "TCP", 00:17:00.585 "adrfam": 
"IPv4", 00:17:00.585 "traddr": "10.0.0.2", 00:17:00.585 "trsvcid": "4420" 00:17:00.585 }, 00:17:00.585 "peer_address": { 00:17:00.585 "trtype": "TCP", 00:17:00.585 "adrfam": "IPv4", 00:17:00.585 "traddr": "10.0.0.1", 00:17:00.585 "trsvcid": "50816" 00:17:00.585 }, 00:17:00.585 "auth": { 00:17:00.585 "state": "completed", 00:17:00.585 "digest": "sha384", 00:17:00.585 "dhgroup": "ffdhe2048" 00:17:00.585 } 00:17:00.585 } 00:17:00.585 ]' 00:17:00.585 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.913 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.913 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.913 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.913 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.913 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.913 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.913 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.913 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:00.913 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:01.518 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.518 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.518 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.518 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.518 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.518 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.518 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.518 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.518 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.778 
09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.778 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.038 00:17:02.038 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.038 09:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.038 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.299 { 00:17:02.299 "cntlid": 65, 00:17:02.299 "qid": 0, 00:17:02.299 "state": "enabled", 00:17:02.299 "thread": "nvmf_tgt_poll_group_000", 00:17:02.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.299 "listen_address": { 00:17:02.299 "trtype": "TCP", 00:17:02.299 "adrfam": "IPv4", 00:17:02.299 "traddr": "10.0.0.2", 00:17:02.299 "trsvcid": "4420" 00:17:02.299 }, 00:17:02.299 "peer_address": { 00:17:02.299 "trtype": "TCP", 00:17:02.299 "adrfam": "IPv4", 00:17:02.299 "traddr": "10.0.0.1", 00:17:02.299 "trsvcid": "50842" 00:17:02.299 }, 00:17:02.299 "auth": { 00:17:02.299 "state": "completed", 00:17:02.299 "digest": "sha384", 00:17:02.299 "dhgroup": "ffdhe3072" 00:17:02.299 } 00:17:02.299 } 00:17:02.299 ]' 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.299 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.561 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.561 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.561 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:02.561 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:03.132 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.394 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.655 00:17:03.655 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.655 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.655 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.914 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.914 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.914 09:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.914 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.914 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.914 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.914 { 00:17:03.914 "cntlid": 67, 00:17:03.914 "qid": 0, 00:17:03.914 "state": "enabled", 00:17:03.914 "thread": "nvmf_tgt_poll_group_000", 00:17:03.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.914 "listen_address": { 00:17:03.914 "trtype": "TCP", 00:17:03.914 "adrfam": "IPv4", 00:17:03.914 "traddr": "10.0.0.2", 00:17:03.914 "trsvcid": "4420" 00:17:03.914 }, 00:17:03.914 "peer_address": { 00:17:03.914 "trtype": "TCP", 00:17:03.914 "adrfam": "IPv4", 00:17:03.914 "traddr": "10.0.0.1", 00:17:03.914 "trsvcid": "50876" 00:17:03.914 }, 00:17:03.914 "auth": { 00:17:03.914 "state": "completed", 00:17:03.914 "digest": "sha384", 00:17:03.914 "dhgroup": "ffdhe3072" 00:17:03.914 } 00:17:03.914 } 00:17:03.914 ]' 00:17:03.914 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.914 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.914 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.914 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:03.914 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.174 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.174 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.174 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.175 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:04.175 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.116 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.377 00:17:05.377 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.377 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.377 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.637 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.637 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.637 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.637 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.637 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.637 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.637 { 00:17:05.637 "cntlid": 69, 00:17:05.637 "qid": 0, 00:17:05.637 "state": "enabled", 00:17:05.637 "thread": "nvmf_tgt_poll_group_000", 00:17:05.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.637 
"listen_address": { 00:17:05.637 "trtype": "TCP", 00:17:05.637 "adrfam": "IPv4", 00:17:05.637 "traddr": "10.0.0.2", 00:17:05.637 "trsvcid": "4420" 00:17:05.637 }, 00:17:05.637 "peer_address": { 00:17:05.637 "trtype": "TCP", 00:17:05.637 "adrfam": "IPv4", 00:17:05.637 "traddr": "10.0.0.1", 00:17:05.637 "trsvcid": "50894" 00:17:05.637 }, 00:17:05.637 "auth": { 00:17:05.637 "state": "completed", 00:17:05.637 "digest": "sha384", 00:17:05.637 "dhgroup": "ffdhe3072" 00:17:05.637 } 00:17:05.637 } 00:17:05.637 ]' 00:17:05.637 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.637 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.637 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.637 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:05.637 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.637 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.637 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.637 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.898 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:05.898 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:06.469 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.469 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.469 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.469 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.469 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.469 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.469 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.469 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.729 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.990 00:17:06.990 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.990 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:06.990 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.250 { 00:17:07.250 "cntlid": 71, 00:17:07.250 "qid": 0, 00:17:07.250 "state": "enabled", 00:17:07.250 "thread": "nvmf_tgt_poll_group_000", 00:17:07.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.250 "listen_address": { 00:17:07.250 "trtype": "TCP", 00:17:07.250 "adrfam": "IPv4", 00:17:07.250 "traddr": "10.0.0.2", 00:17:07.250 "trsvcid": "4420" 00:17:07.250 }, 00:17:07.250 "peer_address": { 00:17:07.250 "trtype": "TCP", 00:17:07.250 "adrfam": "IPv4", 00:17:07.250 "traddr": "10.0.0.1", 00:17:07.250 "trsvcid": "50926" 00:17:07.250 }, 00:17:07.250 "auth": { 00:17:07.250 "state": "completed", 00:17:07.250 "digest": "sha384", 00:17:07.250 "dhgroup": "ffdhe3072" 00:17:07.250 } 00:17:07.250 } 00:17:07.250 ]' 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.250 09:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.250 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.510 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:07.510 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:08.080 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.340 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.600 00:17:08.600 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.600 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.600 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.861 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.861 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.861 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.861 09:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.861 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.861 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.861 { 00:17:08.861 "cntlid": 73, 00:17:08.861 "qid": 0, 00:17:08.861 "state": "enabled", 00:17:08.861 "thread": "nvmf_tgt_poll_group_000", 00:17:08.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.861 "listen_address": { 00:17:08.861 "trtype": "TCP", 00:17:08.861 "adrfam": "IPv4", 00:17:08.861 "traddr": "10.0.0.2", 00:17:08.861 "trsvcid": "4420" 00:17:08.861 }, 00:17:08.861 "peer_address": { 00:17:08.861 "trtype": "TCP", 00:17:08.861 "adrfam": "IPv4", 00:17:08.861 "traddr": "10.0.0.1", 00:17:08.861 "trsvcid": "50958" 00:17:08.861 }, 00:17:08.861 "auth": { 00:17:08.861 "state": "completed", 00:17:08.861 "digest": "sha384", 00:17:08.861 "dhgroup": "ffdhe4096" 00:17:08.861 } 00:17:08.861 } 00:17:08.861 ]' 00:17:08.861 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.861 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.861 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.861 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.861 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.121 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.121 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.121 09:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.121 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:09.121 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.063 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.323 00:17:10.323 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.323 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.323 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.584 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.584 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.584 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.584 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.584 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.584 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.584 { 00:17:10.584 "cntlid": 75, 00:17:10.584 "qid": 0, 00:17:10.584 "state": "enabled", 00:17:10.584 "thread": "nvmf_tgt_poll_group_000", 00:17:10.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.584 
"listen_address": { 00:17:10.584 "trtype": "TCP", 00:17:10.584 "adrfam": "IPv4", 00:17:10.584 "traddr": "10.0.0.2", 00:17:10.584 "trsvcid": "4420" 00:17:10.584 }, 00:17:10.584 "peer_address": { 00:17:10.584 "trtype": "TCP", 00:17:10.584 "adrfam": "IPv4", 00:17:10.584 "traddr": "10.0.0.1", 00:17:10.584 "trsvcid": "52912" 00:17:10.584 }, 00:17:10.584 "auth": { 00:17:10.584 "state": "completed", 00:17:10.584 "digest": "sha384", 00:17:10.584 "dhgroup": "ffdhe4096" 00:17:10.584 } 00:17:10.584 } 00:17:10.584 ]' 00:17:10.584 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.584 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.584 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.584 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.584 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.584 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.584 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.584 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.846 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:10.846 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:11.417 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.417 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.417 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.417 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.417 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.417 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.417 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.417 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.678 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.938 00:17:11.939 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:17:11.939 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.939 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.200 { 00:17:12.200 "cntlid": 77, 00:17:12.200 "qid": 0, 00:17:12.200 "state": "enabled", 00:17:12.200 "thread": "nvmf_tgt_poll_group_000", 00:17:12.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.200 "listen_address": { 00:17:12.200 "trtype": "TCP", 00:17:12.200 "adrfam": "IPv4", 00:17:12.200 "traddr": "10.0.0.2", 00:17:12.200 "trsvcid": "4420" 00:17:12.200 }, 00:17:12.200 "peer_address": { 00:17:12.200 "trtype": "TCP", 00:17:12.200 "adrfam": "IPv4", 00:17:12.200 "traddr": "10.0.0.1", 00:17:12.200 "trsvcid": "52948" 00:17:12.200 }, 00:17:12.200 "auth": { 00:17:12.200 "state": "completed", 00:17:12.200 "digest": "sha384", 00:17:12.200 "dhgroup": "ffdhe4096" 00:17:12.200 } 00:17:12.200 } 00:17:12.200 ]' 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.200 09:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.200 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.461 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:12.461 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:13.032 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.032 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.032 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.032 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.032 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.032 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.032 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.032 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.292 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:13.292 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.292 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.293 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:13.293 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.293 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.293 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:13.293 09:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.293 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.293 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.293 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.293 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.293 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.553 00:17:13.553 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.553 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.553 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.816 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.816 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.816 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.816 09:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.816 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.816 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.816 { 00:17:13.816 "cntlid": 79, 00:17:13.816 "qid": 0, 00:17:13.816 "state": "enabled", 00:17:13.816 "thread": "nvmf_tgt_poll_group_000", 00:17:13.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.816 "listen_address": { 00:17:13.816 "trtype": "TCP", 00:17:13.816 "adrfam": "IPv4", 00:17:13.816 "traddr": "10.0.0.2", 00:17:13.816 "trsvcid": "4420" 00:17:13.816 }, 00:17:13.816 "peer_address": { 00:17:13.816 "trtype": "TCP", 00:17:13.816 "adrfam": "IPv4", 00:17:13.816 "traddr": "10.0.0.1", 00:17:13.816 "trsvcid": "52972" 00:17:13.816 }, 00:17:13.816 "auth": { 00:17:13.816 "state": "completed", 00:17:13.816 "digest": "sha384", 00:17:13.816 "dhgroup": "ffdhe4096" 00:17:13.817 } 00:17:13.817 } 00:17:13.817 ]' 00:17:13.817 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.817 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.817 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.817 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.817 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.817 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.817 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.817 09:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.081 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:14.081 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:14.654 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.654 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.654 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.654 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.654 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.654 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.654 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.654 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:14.654 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.915 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.176 00:17:15.436 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.436 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.436 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.436 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.436 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.436 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.436 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.436 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.436 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.436 { 00:17:15.436 "cntlid": 81, 00:17:15.436 "qid": 0, 00:17:15.436 "state": "enabled", 00:17:15.436 "thread": "nvmf_tgt_poll_group_000", 00:17:15.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:15.436 "listen_address": { 
00:17:15.436 "trtype": "TCP", 00:17:15.436 "adrfam": "IPv4", 00:17:15.436 "traddr": "10.0.0.2", 00:17:15.436 "trsvcid": "4420" 00:17:15.436 }, 00:17:15.436 "peer_address": { 00:17:15.436 "trtype": "TCP", 00:17:15.436 "adrfam": "IPv4", 00:17:15.436 "traddr": "10.0.0.1", 00:17:15.436 "trsvcid": "52996" 00:17:15.436 }, 00:17:15.436 "auth": { 00:17:15.436 "state": "completed", 00:17:15.436 "digest": "sha384", 00:17:15.436 "dhgroup": "ffdhe6144" 00:17:15.436 } 00:17:15.437 } 00:17:15.437 ]' 00:17:15.437 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.697 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.697 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.697 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.697 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.697 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.697 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.697 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.958 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:15.958 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:16.529 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.529 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.529 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.529 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.529 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.529 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.529 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.529 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.789 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.049 00:17:17.049 09:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.049 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.049 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.309 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.309 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.309 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.310 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.310 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.310 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.310 { 00:17:17.310 "cntlid": 83, 00:17:17.310 "qid": 0, 00:17:17.310 "state": "enabled", 00:17:17.310 "thread": "nvmf_tgt_poll_group_000", 00:17:17.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.310 "listen_address": { 00:17:17.310 "trtype": "TCP", 00:17:17.310 "adrfam": "IPv4", 00:17:17.310 "traddr": "10.0.0.2", 00:17:17.310 "trsvcid": "4420" 00:17:17.310 }, 00:17:17.310 "peer_address": { 00:17:17.310 "trtype": "TCP", 00:17:17.310 "adrfam": "IPv4", 00:17:17.310 "traddr": "10.0.0.1", 00:17:17.310 "trsvcid": "53020" 00:17:17.310 }, 00:17:17.310 "auth": { 00:17:17.310 "state": "completed", 00:17:17.310 "digest": "sha384", 00:17:17.310 "dhgroup": "ffdhe6144" 00:17:17.310 } 00:17:17.310 } 00:17:17.310 ]' 00:17:17.310 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:17.310 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.310 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.310 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:17.310 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.310 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.310 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.310 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.570 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:17.570 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:18.141 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.141 09:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.141 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.141 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.141 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.141 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.141 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.141 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.401 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.661 00:17:18.661 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.661 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.662 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.922 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.922 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.922 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.922 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.922 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.922 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.922 { 00:17:18.922 "cntlid": 85, 00:17:18.922 "qid": 0, 00:17:18.922 "state": "enabled", 00:17:18.922 "thread": "nvmf_tgt_poll_group_000", 00:17:18.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.922 "listen_address": { 00:17:18.922 "trtype": "TCP", 00:17:18.922 "adrfam": "IPv4", 00:17:18.922 "traddr": "10.0.0.2", 00:17:18.922 "trsvcid": "4420" 00:17:18.922 }, 00:17:18.922 "peer_address": { 00:17:18.922 "trtype": "TCP", 00:17:18.922 "adrfam": "IPv4", 00:17:18.922 "traddr": "10.0.0.1", 00:17:18.922 "trsvcid": "53054" 00:17:18.922 }, 00:17:18.922 "auth": { 00:17:18.922 "state": "completed", 00:17:18.922 "digest": "sha384", 00:17:18.922 "dhgroup": "ffdhe6144" 00:17:18.922 } 00:17:18.922 } 00:17:18.922 ]' 00:17:18.922 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.922 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.922 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.922 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.922 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.182 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:19.182 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.182 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.182 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:19.182 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.124 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.384 00:17:20.384 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.385 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.385 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.645 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.645 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.645 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.645 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.645 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.645 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.645 { 00:17:20.645 "cntlid": 87, 00:17:20.645 "qid": 0, 00:17:20.645 "state": "enabled", 00:17:20.645 "thread": "nvmf_tgt_poll_group_000", 00:17:20.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.645 "listen_address": { 00:17:20.645 "trtype": 
"TCP", 00:17:20.645 "adrfam": "IPv4", 00:17:20.645 "traddr": "10.0.0.2", 00:17:20.645 "trsvcid": "4420" 00:17:20.645 }, 00:17:20.645 "peer_address": { 00:17:20.645 "trtype": "TCP", 00:17:20.645 "adrfam": "IPv4", 00:17:20.645 "traddr": "10.0.0.1", 00:17:20.645 "trsvcid": "44308" 00:17:20.645 }, 00:17:20.645 "auth": { 00:17:20.645 "state": "completed", 00:17:20.645 "digest": "sha384", 00:17:20.645 "dhgroup": "ffdhe6144" 00:17:20.645 } 00:17:20.645 } 00:17:20.645 ]' 00:17:20.645 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.645 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.645 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.645 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.646 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.906 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.906 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.906 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.906 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:20.906 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:21.478 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.738 09:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.738 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.308 00:17:22.308 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.308 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.308 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.569 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.569 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.569 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.569 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.569 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.569 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.569 { 00:17:22.569 "cntlid": 89, 00:17:22.569 "qid": 0, 00:17:22.569 "state": "enabled", 00:17:22.569 "thread": "nvmf_tgt_poll_group_000", 00:17:22.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.569 "listen_address": { 00:17:22.569 "trtype": "TCP", 00:17:22.569 "adrfam": "IPv4", 00:17:22.569 "traddr": "10.0.0.2", 00:17:22.569 "trsvcid": "4420" 00:17:22.569 }, 00:17:22.569 "peer_address": { 00:17:22.569 "trtype": "TCP", 00:17:22.569 "adrfam": "IPv4", 00:17:22.569 "traddr": "10.0.0.1", 00:17:22.569 "trsvcid": "44330" 00:17:22.569 }, 00:17:22.569 "auth": { 00:17:22.569 "state": "completed", 00:17:22.569 "digest": "sha384", 00:17:22.569 "dhgroup": "ffdhe8192" 00:17:22.569 } 00:17:22.569 } 00:17:22.569 ]' 00:17:22.569 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.569 09:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.569 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.569 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:22.569 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.569 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.569 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.569 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.829 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:22.829 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:23.400 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:23.400 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.400 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.400 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.400 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.400 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.400 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.400 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.660 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.231 00:17:24.231 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.231 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.231 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.231 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.231 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.231 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.231 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.231 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.231 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.231 { 00:17:24.231 "cntlid": 91, 00:17:24.231 "qid": 0, 00:17:24.231 "state": "enabled", 00:17:24.231 "thread": "nvmf_tgt_poll_group_000", 00:17:24.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.231 "listen_address": { 00:17:24.231 "trtype": "TCP", 00:17:24.231 "adrfam": "IPv4", 00:17:24.231 "traddr": "10.0.0.2", 00:17:24.231 "trsvcid": "4420" 00:17:24.231 }, 00:17:24.231 "peer_address": { 00:17:24.231 "trtype": "TCP", 00:17:24.231 "adrfam": "IPv4", 00:17:24.231 "traddr": "10.0.0.1", 00:17:24.231 "trsvcid": "44360" 00:17:24.231 }, 00:17:24.231 "auth": { 00:17:24.231 "state": "completed", 00:17:24.231 "digest": "sha384", 00:17:24.231 "dhgroup": "ffdhe8192" 00:17:24.231 } 00:17:24.231 } 00:17:24.231 ]' 00:17:24.231 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.492 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.492 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.492 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.492 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.492 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:24.492 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.492 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.753 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:24.753 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:25.324 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.325 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.325 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.325 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.325 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.325 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:25.325 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.325 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.587 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.848 00:17:26.108 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.109 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.109 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.109 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.109 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.109 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.109 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.109 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.109 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.109 { 00:17:26.109 "cntlid": 93, 00:17:26.109 "qid": 0, 00:17:26.109 "state": "enabled", 00:17:26.109 "thread": "nvmf_tgt_poll_group_000", 00:17:26.109 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.109 "listen_address": { 00:17:26.109 "trtype": "TCP", 00:17:26.109 "adrfam": "IPv4", 00:17:26.109 "traddr": "10.0.0.2", 00:17:26.109 "trsvcid": "4420" 00:17:26.109 }, 00:17:26.109 "peer_address": { 00:17:26.109 "trtype": "TCP", 00:17:26.109 "adrfam": "IPv4", 00:17:26.109 "traddr": "10.0.0.1", 00:17:26.109 "trsvcid": "44388" 00:17:26.109 }, 00:17:26.109 "auth": { 00:17:26.109 "state": "completed", 00:17:26.109 "digest": "sha384", 00:17:26.109 "dhgroup": "ffdhe8192" 00:17:26.109 } 00:17:26.109 } 00:17:26.109 ]' 00:17:26.109 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.370 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.370 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.370 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.370 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.370 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.370 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.370 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.630 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:26.630 09:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.201 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.772 00:17:27.772 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:27.772 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.772 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.033 { 00:17:28.033 "cntlid": 95, 00:17:28.033 "qid": 0, 00:17:28.033 "state": "enabled", 00:17:28.033 "thread": "nvmf_tgt_poll_group_000", 00:17:28.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.033 "listen_address": { 00:17:28.033 "trtype": "TCP", 00:17:28.033 "adrfam": "IPv4", 00:17:28.033 "traddr": "10.0.0.2", 00:17:28.033 "trsvcid": "4420" 00:17:28.033 }, 00:17:28.033 "peer_address": { 00:17:28.033 "trtype": "TCP", 00:17:28.033 "adrfam": "IPv4", 00:17:28.033 "traddr": "10.0.0.1", 00:17:28.033 "trsvcid": "44408" 00:17:28.033 }, 00:17:28.033 "auth": { 00:17:28.033 "state": "completed", 00:17:28.033 "digest": "sha384", 00:17:28.033 "dhgroup": "ffdhe8192" 00:17:28.033 } 00:17:28.033 } 00:17:28.033 ]' 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.033 09:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.033 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.294 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:28.294 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:28.865 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.865 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.865 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.865 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.865 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.865 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:28.865 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.865 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.865 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:28.865 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.125 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.386 00:17:29.386 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.386 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.386 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.646 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.646 09:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.646 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.646 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.646 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.646 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.646 { 00:17:29.646 "cntlid": 97, 00:17:29.646 "qid": 0, 00:17:29.646 "state": "enabled", 00:17:29.646 "thread": "nvmf_tgt_poll_group_000", 00:17:29.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.646 "listen_address": { 00:17:29.646 "trtype": "TCP", 00:17:29.646 "adrfam": "IPv4", 00:17:29.646 "traddr": "10.0.0.2", 00:17:29.646 "trsvcid": "4420" 00:17:29.646 }, 00:17:29.646 "peer_address": { 00:17:29.646 "trtype": "TCP", 00:17:29.646 "adrfam": "IPv4", 00:17:29.646 "traddr": "10.0.0.1", 00:17:29.646 "trsvcid": "43554" 00:17:29.646 }, 00:17:29.646 "auth": { 00:17:29.646 "state": "completed", 00:17:29.646 "digest": "sha512", 00:17:29.646 "dhgroup": "null" 00:17:29.646 } 00:17:29.646 } 00:17:29.646 ]' 00:17:29.646 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.646 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.646 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.646 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:29.646 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.646 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.646 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.646 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.906 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:29.906 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:30.478 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.478 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.478 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.478 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.478 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.478 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.478 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.478 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.739 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.000 00:17:31.000 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.000 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.000 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.261 { 00:17:31.261 "cntlid": 99, 
00:17:31.261 "qid": 0, 00:17:31.261 "state": "enabled", 00:17:31.261 "thread": "nvmf_tgt_poll_group_000", 00:17:31.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.261 "listen_address": { 00:17:31.261 "trtype": "TCP", 00:17:31.261 "adrfam": "IPv4", 00:17:31.261 "traddr": "10.0.0.2", 00:17:31.261 "trsvcid": "4420" 00:17:31.261 }, 00:17:31.261 "peer_address": { 00:17:31.261 "trtype": "TCP", 00:17:31.261 "adrfam": "IPv4", 00:17:31.261 "traddr": "10.0.0.1", 00:17:31.261 "trsvcid": "43586" 00:17:31.261 }, 00:17:31.261 "auth": { 00:17:31.261 "state": "completed", 00:17:31.261 "digest": "sha512", 00:17:31.261 "dhgroup": "null" 00:17:31.261 } 00:17:31.261 } 00:17:31.261 ]' 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.261 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.523 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret 
DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:31.523 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:32.094 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.094 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.094 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.094 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.094 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.094 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.094 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.094 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
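The `--dhchap-secret`/`--dhchap-ctrl-secret` values exchanged above follow the DHHC-1 key format from the NVMe DH-HMAC-CHAP specification: `DHHC-1:<hh>:<base64>:`, where `<hh>` names the hash protecting the transported key (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A minimal sketch, assuming that format; the function name is illustrative and not part of `auth.sh`, and the key bodies are abbreviated from the log:

```shell
#!/usr/bin/env bash
# Split a DHHC-1 secret into its colon-separated fields and report which
# hash the second field selects (per the NVMe DH-HMAC-CHAP key format).
describe_dhchap_secret() {
    local secret=$1 _prefix hash _key
    IFS=: read -r _prefix hash _key _ <<<"$secret"
    case $hash in
        00) echo "no hash (raw key)" ;;
        01) echo "SHA-256" ;;
        02) echo "SHA-384" ;;
        03) echo "SHA-512" ;;
        *)  echo "unknown" ;;
    esac
}

# Secrets abbreviated from the log records above:
describe_dhchap_secret "DHHC-1:00:MmE3MmQx...Og==:"   # host key
describe_dhchap_secret "DHHC-1:03:N2JmNzRk...srM=:"   # controller key
```

Run as-is this prints `no hash (raw key)` then `SHA-512`, matching the `:00:` and `:03:` prefixes of the first key pair tested in this section.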
00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.355 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.615 00:17:32.615 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.615 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.615 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.615 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.615 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.615 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.615 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.615 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.615 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.615 { 00:17:32.615 "cntlid": 101, 00:17:32.615 "qid": 0, 00:17:32.615 "state": "enabled", 00:17:32.615 "thread": "nvmf_tgt_poll_group_000", 00:17:32.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.615 "listen_address": { 00:17:32.615 "trtype": "TCP", 00:17:32.615 "adrfam": "IPv4", 00:17:32.615 "traddr": "10.0.0.2", 00:17:32.615 "trsvcid": "4420" 00:17:32.615 }, 00:17:32.615 "peer_address": { 00:17:32.615 "trtype": "TCP", 00:17:32.615 "adrfam": "IPv4", 00:17:32.615 "traddr": "10.0.0.1", 00:17:32.615 "trsvcid": "43622" 00:17:32.615 }, 00:17:32.615 "auth": { 00:17:32.615 "state": "completed", 00:17:32.615 "digest": "sha512", 00:17:32.615 "dhgroup": "null" 00:17:32.615 } 00:17:32.615 } 
00:17:32.615 ]' 00:17:32.615 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.875 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.875 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.875 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:32.875 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.875 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.875 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.876 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.136 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:33.136 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:33.709 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.709 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.709 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.709 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.709 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.709 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.709 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.709 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.709 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.968 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.229 00:17:34.229 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.229 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.229 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.229 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.229 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:34.229 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.229 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.229 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.229 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.229 { 00:17:34.229 "cntlid": 103, 00:17:34.229 "qid": 0, 00:17:34.229 "state": "enabled", 00:17:34.229 "thread": "nvmf_tgt_poll_group_000", 00:17:34.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.229 "listen_address": { 00:17:34.229 "trtype": "TCP", 00:17:34.229 "adrfam": "IPv4", 00:17:34.229 "traddr": "10.0.0.2", 00:17:34.229 "trsvcid": "4420" 00:17:34.229 }, 00:17:34.229 "peer_address": { 00:17:34.229 "trtype": "TCP", 00:17:34.229 "adrfam": "IPv4", 00:17:34.229 "traddr": "10.0.0.1", 00:17:34.229 "trsvcid": "43638" 00:17:34.229 }, 00:17:34.229 "auth": { 00:17:34.229 "state": "completed", 00:17:34.229 "digest": "sha512", 00:17:34.229 "dhgroup": "null" 00:17:34.229 } 00:17:34.229 } 00:17:34.229 ]' 00:17:34.229 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.490 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.490 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.490 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:34.490 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.490 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.490 09:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.490 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.751 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:34.751 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:35.323 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.323 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.323 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.323 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.323 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.323 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.323 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.323 09:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:35.323 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.585 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.845 00:17:35.846 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.846 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.846 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.846 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.846 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.846 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.846 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.846 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.846 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.846 { 00:17:35.846 "cntlid": 105, 00:17:35.846 "qid": 0, 00:17:35.846 "state": "enabled", 00:17:35.846 "thread": "nvmf_tgt_poll_group_000", 00:17:35.846 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.846 "listen_address": { 00:17:35.846 "trtype": "TCP", 00:17:35.846 "adrfam": "IPv4", 00:17:35.846 "traddr": "10.0.0.2", 00:17:35.846 "trsvcid": "4420" 00:17:35.846 }, 00:17:35.846 "peer_address": { 00:17:35.846 "trtype": "TCP", 00:17:35.846 "adrfam": "IPv4", 00:17:35.846 "traddr": "10.0.0.1", 00:17:35.846 "trsvcid": "43670" 00:17:35.846 }, 00:17:35.846 "auth": { 00:17:35.846 "state": "completed", 00:17:35.846 "digest": "sha512", 00:17:35.846 "dhgroup": "ffdhe2048" 00:17:35.846 } 00:17:35.846 } 00:17:35.846 ]' 00:17:35.846 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.107 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.107 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.107 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.107 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.107 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.107 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.107 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.367 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret 
DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:36.368 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:36.939 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.939 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.939 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.939 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.939 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.939 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.939 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:36.939 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:37.200 09:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:37.200 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.200 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.200 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:37.200 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.200 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.200 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.200 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.200 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.200 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.200 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.200 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.201 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.462 00:17:37.462 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.462 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.462 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.462 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.462 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.462 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.462 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.462 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.462 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.462 { 00:17:37.462 "cntlid": 107, 00:17:37.462 "qid": 0, 00:17:37.462 "state": "enabled", 00:17:37.462 "thread": "nvmf_tgt_poll_group_000", 00:17:37.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.462 "listen_address": { 00:17:37.462 "trtype": "TCP", 00:17:37.462 "adrfam": "IPv4", 00:17:37.462 "traddr": "10.0.0.2", 00:17:37.462 "trsvcid": "4420" 00:17:37.462 }, 00:17:37.462 "peer_address": { 00:17:37.462 "trtype": "TCP", 00:17:37.462 "adrfam": "IPv4", 00:17:37.462 "traddr": "10.0.0.1", 00:17:37.462 "trsvcid": "43690" 00:17:37.462 }, 00:17:37.462 "auth": { 00:17:37.462 "state": 
"completed", 00:17:37.462 "digest": "sha512", 00:17:37.462 "dhgroup": "ffdhe2048" 00:17:37.462 } 00:17:37.462 } 00:17:37.462 ]' 00:17:37.462 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.723 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.723 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.723 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:37.723 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.723 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.723 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.723 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.984 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:37.984 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:38.610 09:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.610 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.610 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.610 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.610 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.610 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.610 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.610 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.922 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.922 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.211 
09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.211 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.211 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.211 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.211 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.211 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.211 { 00:17:39.211 "cntlid": 109, 00:17:39.211 "qid": 0, 00:17:39.211 "state": "enabled", 00:17:39.211 "thread": "nvmf_tgt_poll_group_000", 00:17:39.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.211 "listen_address": { 00:17:39.211 "trtype": "TCP", 00:17:39.211 "adrfam": "IPv4", 00:17:39.211 "traddr": "10.0.0.2", 00:17:39.211 "trsvcid": "4420" 00:17:39.211 }, 00:17:39.211 "peer_address": { 00:17:39.211 "trtype": "TCP", 00:17:39.211 "adrfam": "IPv4", 00:17:39.211 "traddr": "10.0.0.1", 00:17:39.211 "trsvcid": "43720" 00:17:39.211 }, 00:17:39.211 "auth": { 00:17:39.211 "state": "completed", 00:17:39.211 "digest": "sha512", 00:17:39.211 "dhgroup": "ffdhe2048" 00:17:39.211 } 00:17:39.211 } 00:17:39.211 ]' 00:17:39.211 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.211 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.211 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.211 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.472 09:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.472 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.472 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.472 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.472 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:39.472 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.412 
09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.412 09:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.412 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.673 00:17:40.673 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.673 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.673 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.933 { 00:17:40.933 "cntlid": 111, 
00:17:40.933 "qid": 0, 00:17:40.933 "state": "enabled", 00:17:40.933 "thread": "nvmf_tgt_poll_group_000", 00:17:40.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.933 "listen_address": { 00:17:40.933 "trtype": "TCP", 00:17:40.933 "adrfam": "IPv4", 00:17:40.933 "traddr": "10.0.0.2", 00:17:40.933 "trsvcid": "4420" 00:17:40.933 }, 00:17:40.933 "peer_address": { 00:17:40.933 "trtype": "TCP", 00:17:40.933 "adrfam": "IPv4", 00:17:40.933 "traddr": "10.0.0.1", 00:17:40.933 "trsvcid": "58558" 00:17:40.933 }, 00:17:40.933 "auth": { 00:17:40.933 "state": "completed", 00:17:40.933 "digest": "sha512", 00:17:40.933 "dhgroup": "ffdhe2048" 00:17:40.933 } 00:17:40.933 } 00:17:40.933 ]' 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.933 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.194 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:41.194 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:41.764 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.764 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.764 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.764 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.764 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.764 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.764 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.764 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:41.764 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:42.024 09:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.024 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.285 00:17:42.285 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.285 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.285 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.545 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.545 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.545 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.545 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.545 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.545 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.545 { 00:17:42.545 "cntlid": 113, 00:17:42.545 "qid": 0, 00:17:42.545 "state": "enabled", 00:17:42.545 "thread": "nvmf_tgt_poll_group_000", 00:17:42.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.545 "listen_address": { 00:17:42.545 "trtype": "TCP", 00:17:42.545 "adrfam": "IPv4", 00:17:42.545 "traddr": "10.0.0.2", 00:17:42.545 "trsvcid": "4420" 00:17:42.545 }, 00:17:42.545 "peer_address": { 00:17:42.545 "trtype": "TCP", 00:17:42.545 "adrfam": "IPv4", 00:17:42.545 "traddr": "10.0.0.1", 00:17:42.545 "trsvcid": "58584" 00:17:42.545 }, 00:17:42.545 "auth": { 00:17:42.545 "state": 
"completed", 00:17:42.545 "digest": "sha512", 00:17:42.545 "dhgroup": "ffdhe3072" 00:17:42.545 } 00:17:42.545 } 00:17:42.545 ]' 00:17:42.545 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.545 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.545 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.545 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.545 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.545 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.545 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.545 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.805 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:42.805 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret 
DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:43.376 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.376 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.376 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.376 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.376 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.376 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.376 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.376 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.637 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.896 00:17:43.896 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.896 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.896 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.157 { 00:17:44.157 "cntlid": 115, 00:17:44.157 "qid": 0, 00:17:44.157 "state": "enabled", 00:17:44.157 "thread": "nvmf_tgt_poll_group_000", 00:17:44.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.157 "listen_address": { 00:17:44.157 "trtype": "TCP", 00:17:44.157 "adrfam": "IPv4", 00:17:44.157 "traddr": "10.0.0.2", 00:17:44.157 "trsvcid": "4420" 00:17:44.157 }, 00:17:44.157 "peer_address": { 00:17:44.157 "trtype": "TCP", 00:17:44.157 "adrfam": "IPv4", 00:17:44.157 "traddr": "10.0.0.1", 00:17:44.157 "trsvcid": "58596" 00:17:44.157 }, 00:17:44.157 "auth": { 00:17:44.157 "state": "completed", 00:17:44.157 "digest": "sha512", 00:17:44.157 "dhgroup": "ffdhe3072" 00:17:44.157 } 00:17:44.157 } 00:17:44.157 ]' 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.157 09:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.157 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.417 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:44.417 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:44.987 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.987 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.987 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:44.987 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.987 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.987 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.987 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:44.987 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.247 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:45.247 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.248 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.248 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:45.248 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.248 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.248 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.248 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.248 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:45.248 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.248 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.248 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.248 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.509 00:17:45.509 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.509 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.509 09:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.770 09:03:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.770 { 00:17:45.770 "cntlid": 117, 00:17:45.770 "qid": 0, 00:17:45.770 "state": "enabled", 00:17:45.770 "thread": "nvmf_tgt_poll_group_000", 00:17:45.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.770 "listen_address": { 00:17:45.770 "trtype": "TCP", 00:17:45.770 "adrfam": "IPv4", 00:17:45.770 "traddr": "10.0.0.2", 00:17:45.770 "trsvcid": "4420" 00:17:45.770 }, 00:17:45.770 "peer_address": { 00:17:45.770 "trtype": "TCP", 00:17:45.770 "adrfam": "IPv4", 00:17:45.770 "traddr": "10.0.0.1", 00:17:45.770 "trsvcid": "58636" 00:17:45.770 }, 00:17:45.770 "auth": { 00:17:45.770 "state": "completed", 00:17:45.770 "digest": "sha512", 00:17:45.770 "dhgroup": "ffdhe3072" 00:17:45.770 } 00:17:45.770 } 00:17:45.770 ]' 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.770 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.031 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:46.031 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:46.602 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.602 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.602 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.602 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.602 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.602 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.602 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.602 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.862 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.123 00:17:47.123 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.123 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.123 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.384 { 00:17:47.384 "cntlid": 119, 00:17:47.384 "qid": 0, 00:17:47.384 "state": "enabled", 00:17:47.384 "thread": "nvmf_tgt_poll_group_000", 00:17:47.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.384 "listen_address": { 00:17:47.384 "trtype": "TCP", 00:17:47.384 "adrfam": "IPv4", 00:17:47.384 "traddr": "10.0.0.2", 00:17:47.384 "trsvcid": "4420" 00:17:47.384 }, 00:17:47.384 "peer_address": { 00:17:47.384 "trtype": "TCP", 00:17:47.384 "adrfam": "IPv4", 00:17:47.384 "traddr": "10.0.0.1", 
00:17:47.384 "trsvcid": "58660" 00:17:47.384 }, 00:17:47.384 "auth": { 00:17:47.384 "state": "completed", 00:17:47.384 "digest": "sha512", 00:17:47.384 "dhgroup": "ffdhe3072" 00:17:47.384 } 00:17:47.384 } 00:17:47.384 ]' 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.384 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.645 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:47.645 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:48.217 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.217 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.217 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.217 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.217 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.217 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.217 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.217 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.217 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.477 09:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.477 09:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.737 00:17:48.737 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.737 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.737 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.998 { 00:17:48.998 "cntlid": 121, 00:17:48.998 "qid": 0, 00:17:48.998 "state": "enabled", 00:17:48.998 "thread": "nvmf_tgt_poll_group_000", 00:17:48.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.998 "listen_address": { 00:17:48.998 "trtype": "TCP", 00:17:48.998 "adrfam": "IPv4", 00:17:48.998 "traddr": "10.0.0.2", 00:17:48.998 "trsvcid": "4420" 00:17:48.998 }, 00:17:48.998 "peer_address": { 00:17:48.998 "trtype": "TCP", 00:17:48.998 "adrfam": "IPv4", 00:17:48.998 "traddr": "10.0.0.1", 00:17:48.998 "trsvcid": "58698" 00:17:48.998 }, 00:17:48.998 "auth": { 00:17:48.998 "state": "completed", 00:17:48.998 "digest": "sha512", 00:17:48.998 "dhgroup": "ffdhe4096" 00:17:48.998 } 00:17:48.998 } 00:17:48.998 ]' 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.998 09:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.998 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.259 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:49.259 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:49.831 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.832 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.832 09:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.832 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.832 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.832 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.832 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.832 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.092 09:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.092 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.353 00:17:50.353 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.353 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.353 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.618 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.618 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.618 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.618 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:50.618 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.618 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.618 { 00:17:50.618 "cntlid": 123, 00:17:50.618 "qid": 0, 00:17:50.618 "state": "enabled", 00:17:50.618 "thread": "nvmf_tgt_poll_group_000", 00:17:50.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.618 "listen_address": { 00:17:50.618 "trtype": "TCP", 00:17:50.618 "adrfam": "IPv4", 00:17:50.618 "traddr": "10.0.0.2", 00:17:50.618 "trsvcid": "4420" 00:17:50.618 }, 00:17:50.618 "peer_address": { 00:17:50.618 "trtype": "TCP", 00:17:50.618 "adrfam": "IPv4", 00:17:50.618 "traddr": "10.0.0.1", 00:17:50.618 "trsvcid": "59410" 00:17:50.618 }, 00:17:50.618 "auth": { 00:17:50.618 "state": "completed", 00:17:50.618 "digest": "sha512", 00:17:50.618 "dhgroup": "ffdhe4096" 00:17:50.618 } 00:17:50.618 } 00:17:50.618 ]' 00:17:50.618 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.618 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.618 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.618 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.618 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.618 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.618 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.618 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.879 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:50.879 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:51.450 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.717 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.717 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.717 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.717 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.717 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.717 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.717 09:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.717 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.977 00:17:51.977 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.977 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.977 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.238 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.238 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.238 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.238 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.238 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.238 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.238 { 00:17:52.238 "cntlid": 125, 00:17:52.238 "qid": 0, 00:17:52.238 "state": "enabled", 00:17:52.238 "thread": "nvmf_tgt_poll_group_000", 00:17:52.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.238 "listen_address": { 00:17:52.238 "trtype": "TCP", 00:17:52.238 "adrfam": "IPv4", 00:17:52.238 "traddr": "10.0.0.2", 00:17:52.238 
"trsvcid": "4420" 00:17:52.238 }, 00:17:52.238 "peer_address": { 00:17:52.238 "trtype": "TCP", 00:17:52.238 "adrfam": "IPv4", 00:17:52.238 "traddr": "10.0.0.1", 00:17:52.238 "trsvcid": "59444" 00:17:52.238 }, 00:17:52.238 "auth": { 00:17:52.238 "state": "completed", 00:17:52.238 "digest": "sha512", 00:17:52.238 "dhgroup": "ffdhe4096" 00:17:52.238 } 00:17:52.238 } 00:17:52.238 ]' 00:17:52.238 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.238 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.238 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.238 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.238 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.499 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.499 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.499 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.499 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:52.499 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.441 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.701 00:17:53.701 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.701 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.701 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.961 { 00:17:53.961 "cntlid": 127, 00:17:53.961 "qid": 0, 00:17:53.961 "state": "enabled", 00:17:53.961 "thread": "nvmf_tgt_poll_group_000", 00:17:53.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.961 "listen_address": { 00:17:53.961 "trtype": "TCP", 00:17:53.961 "adrfam": "IPv4", 00:17:53.961 "traddr": "10.0.0.2", 00:17:53.961 "trsvcid": "4420" 00:17:53.961 }, 00:17:53.961 "peer_address": { 00:17:53.961 "trtype": "TCP", 00:17:53.961 "adrfam": "IPv4", 00:17:53.961 "traddr": "10.0.0.1", 00:17:53.961 "trsvcid": "59470" 00:17:53.961 }, 00:17:53.961 "auth": { 00:17:53.961 "state": "completed", 00:17:53.961 "digest": "sha512", 00:17:53.961 "dhgroup": "ffdhe4096" 00:17:53.961 } 00:17:53.961 } 00:17:53.961 ]' 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.961 09:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.961 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.222 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:54.222 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:17:54.794 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.794 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.794 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.794 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:54.794 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.794 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.794 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.794 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.794 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.054 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.314 00:17:55.314 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.314 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.314 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.574 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.574 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.574 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.574 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.574 09:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.574 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.574 { 00:17:55.574 "cntlid": 129, 00:17:55.574 "qid": 0, 00:17:55.574 "state": "enabled", 00:17:55.574 "thread": "nvmf_tgt_poll_group_000", 00:17:55.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.574 "listen_address": { 00:17:55.574 "trtype": "TCP", 00:17:55.574 "adrfam": "IPv4", 00:17:55.574 "traddr": "10.0.0.2", 00:17:55.574 "trsvcid": "4420" 00:17:55.574 }, 00:17:55.574 "peer_address": { 00:17:55.574 "trtype": "TCP", 00:17:55.574 "adrfam": "IPv4", 00:17:55.574 "traddr": "10.0.0.1", 00:17:55.574 "trsvcid": "59512" 00:17:55.574 }, 00:17:55.574 "auth": { 00:17:55.574 "state": "completed", 00:17:55.574 "digest": "sha512", 00:17:55.574 "dhgroup": "ffdhe6144" 00:17:55.574 } 00:17:55.574 } 00:17:55.574 ]' 00:17:55.574 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.575 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.575 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.575 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:55.575 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.835 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.835 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.835 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.835 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:55.835 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:17:56.406 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.667 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.667 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.667 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.667 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.667 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.667 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:56.667 09:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.667 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.930 00:17:57.190 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.190 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.190 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.190 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.190 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.190 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.190 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.190 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.190 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.190 { 00:17:57.190 "cntlid": 131, 00:17:57.190 "qid": 0, 00:17:57.190 "state": "enabled", 00:17:57.190 "thread": "nvmf_tgt_poll_group_000", 00:17:57.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.190 "listen_address": { 00:17:57.190 "trtype": "TCP", 00:17:57.190 "adrfam": "IPv4", 00:17:57.190 "traddr": "10.0.0.2", 00:17:57.190 
"trsvcid": "4420" 00:17:57.190 }, 00:17:57.190 "peer_address": { 00:17:57.190 "trtype": "TCP", 00:17:57.190 "adrfam": "IPv4", 00:17:57.190 "traddr": "10.0.0.1", 00:17:57.190 "trsvcid": "59538" 00:17:57.190 }, 00:17:57.190 "auth": { 00:17:57.190 "state": "completed", 00:17:57.190 "digest": "sha512", 00:17:57.190 "dhgroup": "ffdhe6144" 00:17:57.190 } 00:17:57.190 } 00:17:57.190 ]' 00:17:57.190 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.450 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.450 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.450 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.450 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.450 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.450 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.450 09:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.711 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:57.711 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:17:58.282 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.282 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.282 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.282 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.282 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.282 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.282 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.282 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.543 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.804 00:17:58.804 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.804 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:58.804 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.065 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.065 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.065 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.065 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.065 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.065 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.065 { 00:17:59.065 "cntlid": 133, 00:17:59.065 "qid": 0, 00:17:59.065 "state": "enabled", 00:17:59.065 "thread": "nvmf_tgt_poll_group_000", 00:17:59.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.065 "listen_address": { 00:17:59.065 "trtype": "TCP", 00:17:59.065 "adrfam": "IPv4", 00:17:59.065 "traddr": "10.0.0.2", 00:17:59.065 "trsvcid": "4420" 00:17:59.065 }, 00:17:59.065 "peer_address": { 00:17:59.065 "trtype": "TCP", 00:17:59.065 "adrfam": "IPv4", 00:17:59.065 "traddr": "10.0.0.1", 00:17:59.065 "trsvcid": "59558" 00:17:59.065 }, 00:17:59.065 "auth": { 00:17:59.065 "state": "completed", 00:17:59.065 "digest": "sha512", 00:17:59.065 "dhgroup": "ffdhe6144" 00:17:59.065 } 00:17:59.065 } 00:17:59.065 ]' 00:17:59.065 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.065 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.065 09:03:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.326 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.326 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.326 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.326 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.326 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.586 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:17:59.586 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:18:00.157 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.157 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.157 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.157 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.157 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.157 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.157 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.157 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.420 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.682 00:18:00.682 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.682 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.682 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.944 { 00:18:00.944 "cntlid": 135, 00:18:00.944 "qid": 0, 00:18:00.944 "state": "enabled", 00:18:00.944 "thread": "nvmf_tgt_poll_group_000", 00:18:00.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.944 "listen_address": { 00:18:00.944 "trtype": "TCP", 00:18:00.944 "adrfam": "IPv4", 00:18:00.944 "traddr": "10.0.0.2", 00:18:00.944 "trsvcid": "4420" 00:18:00.944 }, 00:18:00.944 "peer_address": { 00:18:00.944 "trtype": "TCP", 00:18:00.944 "adrfam": "IPv4", 00:18:00.944 "traddr": "10.0.0.1", 00:18:00.944 "trsvcid": "51972" 00:18:00.944 }, 00:18:00.944 "auth": { 00:18:00.944 "state": "completed", 00:18:00.944 "digest": "sha512", 00:18:00.944 "dhgroup": "ffdhe6144" 00:18:00.944 } 00:18:00.944 } 00:18:00.944 ]' 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.944 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.205 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:18:01.205 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:18:01.776 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.776 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.776 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.776 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.776 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.776 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.776 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.776 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.776 09:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.050 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:02.050 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.050 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.050 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:02.050 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.050 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.051 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.051 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.051 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.051 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.051 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.051 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.051 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.626 00:18:02.626 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.626 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.626 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.626 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.626 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.626 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.626 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.626 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.626 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.626 { 00:18:02.626 "cntlid": 137, 00:18:02.626 "qid": 0, 00:18:02.626 "state": "enabled", 00:18:02.626 "thread": "nvmf_tgt_poll_group_000", 00:18:02.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.626 "listen_address": { 00:18:02.626 "trtype": "TCP", 00:18:02.626 "adrfam": "IPv4", 00:18:02.626 "traddr": "10.0.0.2", 00:18:02.626 
"trsvcid": "4420" 00:18:02.626 }, 00:18:02.626 "peer_address": { 00:18:02.626 "trtype": "TCP", 00:18:02.626 "adrfam": "IPv4", 00:18:02.626 "traddr": "10.0.0.1", 00:18:02.626 "trsvcid": "52000" 00:18:02.626 }, 00:18:02.626 "auth": { 00:18:02.626 "state": "completed", 00:18:02.626 "digest": "sha512", 00:18:02.626 "dhgroup": "ffdhe8192" 00:18:02.626 } 00:18:02.626 } 00:18:02.626 ]' 00:18:02.626 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.626 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.626 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.887 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.887 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.887 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.887 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.887 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.149 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:18:03.149 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:18:03.722 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.722 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.722 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.722 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.722 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.722 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.722 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.722 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.982 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.982 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.243 00:18:04.243 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.243 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.243 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.504 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.504 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.504 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.504 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.504 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.504 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.504 { 00:18:04.504 "cntlid": 139, 00:18:04.504 "qid": 0, 00:18:04.504 "state": "enabled", 00:18:04.504 "thread": "nvmf_tgt_poll_group_000", 00:18:04.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.504 "listen_address": { 00:18:04.504 "trtype": "TCP", 00:18:04.504 "adrfam": "IPv4", 00:18:04.504 "traddr": "10.0.0.2", 00:18:04.504 "trsvcid": "4420" 00:18:04.504 }, 00:18:04.504 "peer_address": { 00:18:04.504 "trtype": "TCP", 00:18:04.504 "adrfam": "IPv4", 00:18:04.504 "traddr": "10.0.0.1", 00:18:04.504 "trsvcid": "52032" 00:18:04.504 }, 00:18:04.504 "auth": { 00:18:04.504 "state": "completed", 00:18:04.504 "digest": "sha512", 00:18:04.504 "dhgroup": "ffdhe8192" 00:18:04.504 } 00:18:04.504 } 00:18:04.504 ]' 00:18:04.504 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.504 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.504 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.504 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.504 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.766 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.766 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.766 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.766 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:18:04.766 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: --dhchap-ctrl-secret DHHC-1:02:NzY0ODQ1M2QxMzJlNTc2ZDcxMjcxMzMwMTQ5NDhiM2NhMTdhNTdkYmU2MDFjMTQ5ZyAlsg==: 00:18:05.708 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.708 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.708 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.708 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.708 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.708 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.708 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:05.708 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.708 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.281 00:18:06.281 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.281 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.281 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.281 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.281 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.281 09:03:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.281 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.281 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.281 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.281 { 00:18:06.281 "cntlid": 141, 00:18:06.281 "qid": 0, 00:18:06.281 "state": "enabled", 00:18:06.281 "thread": "nvmf_tgt_poll_group_000", 00:18:06.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.281 "listen_address": { 00:18:06.281 "trtype": "TCP", 00:18:06.281 "adrfam": "IPv4", 00:18:06.281 "traddr": "10.0.0.2", 00:18:06.281 "trsvcid": "4420" 00:18:06.281 }, 00:18:06.281 "peer_address": { 00:18:06.281 "trtype": "TCP", 00:18:06.281 "adrfam": "IPv4", 00:18:06.281 "traddr": "10.0.0.1", 00:18:06.281 "trsvcid": "52070" 00:18:06.281 }, 00:18:06.281 "auth": { 00:18:06.281 "state": "completed", 00:18:06.281 "digest": "sha512", 00:18:06.281 "dhgroup": "ffdhe8192" 00:18:06.281 } 00:18:06.281 } 00:18:06.281 ]' 00:18:06.281 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.542 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.542 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.542 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.542 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.542 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.542 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.542 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.804 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:18:06.804 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:01:YThiN2M0ZDA4YzkwMDE5NGFjYzhhOGU0YTg1ZDhmOTTid7cE: 00:18:07.375 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.375 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.375 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.375 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.375 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.375 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.375 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.375 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.636 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.896 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.156 { 00:18:08.156 "cntlid": 143, 00:18:08.156 "qid": 0, 00:18:08.156 "state": "enabled", 00:18:08.156 "thread": "nvmf_tgt_poll_group_000", 00:18:08.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.156 "listen_address": { 00:18:08.156 "trtype": "TCP", 00:18:08.156 "adrfam": 
"IPv4", 00:18:08.156 "traddr": "10.0.0.2", 00:18:08.156 "trsvcid": "4420" 00:18:08.156 }, 00:18:08.156 "peer_address": { 00:18:08.156 "trtype": "TCP", 00:18:08.156 "adrfam": "IPv4", 00:18:08.156 "traddr": "10.0.0.1", 00:18:08.156 "trsvcid": "52098" 00:18:08.156 }, 00:18:08.156 "auth": { 00:18:08.156 "state": "completed", 00:18:08.156 "digest": "sha512", 00:18:08.156 "dhgroup": "ffdhe8192" 00:18:08.156 } 00:18:08.156 } 00:18:08.156 ]' 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.156 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.416 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.416 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.416 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.416 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.416 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.416 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:18:08.416 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.369 09:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.369 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.370 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.940 00:18:09.940 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.940 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.940 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.940 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.940 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.940 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.940 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.940 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.940 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.940 { 00:18:09.940 "cntlid": 145, 00:18:09.940 "qid": 0, 00:18:09.940 "state": "enabled", 00:18:09.940 "thread": "nvmf_tgt_poll_group_000", 00:18:09.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.940 "listen_address": { 00:18:09.940 "trtype": "TCP", 00:18:09.940 "adrfam": "IPv4", 00:18:09.940 "traddr": "10.0.0.2", 00:18:09.940 "trsvcid": "4420" 00:18:09.940 }, 00:18:09.940 "peer_address": { 00:18:09.940 "trtype": "TCP", 00:18:09.940 "adrfam": "IPv4", 00:18:09.940 "traddr": "10.0.0.1", 00:18:09.940 "trsvcid": "54410" 00:18:09.940 }, 00:18:09.940 "auth": { 00:18:09.940 "state": 
"completed", 00:18:09.940 "digest": "sha512", 00:18:09.940 "dhgroup": "ffdhe8192" 00:18:09.940 } 00:18:09.940 } 00:18:09.940 ]' 00:18:09.940 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.201 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.201 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.201 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.201 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.201 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.201 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.201 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.460 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:18:10.460 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MmE3MmQxOTVmNDQ0N2RmMmY4ZmY0YWRjOTFkZTZlMTNmZjkzZjU3ZWQxOGYzODU15vVnOg==: --dhchap-ctrl-secret 
DHHC-1:03:N2JmNzRkY2Y3ZGE5MTZjN2MyNWRlMDdiYjUxY2FlN2U3NjExOWI3YmNjZmY3MzUzYzRlODc3ZDE3YzQ1NzEwZHeAsrM=: 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:11.030 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:11.602 request: 00:18:11.602 { 00:18:11.602 "name": "nvme0", 00:18:11.602 "trtype": "tcp", 00:18:11.602 "traddr": "10.0.0.2", 00:18:11.602 "adrfam": "ipv4", 00:18:11.602 "trsvcid": "4420", 00:18:11.602 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:11.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.602 "prchk_reftag": false, 00:18:11.602 "prchk_guard": false, 00:18:11.602 "hdgst": false, 00:18:11.602 "ddgst": false, 00:18:11.602 "dhchap_key": "key2", 00:18:11.602 "allow_unrecognized_csi": false, 00:18:11.602 "method": "bdev_nvme_attach_controller", 00:18:11.602 "req_id": 1 00:18:11.602 } 00:18:11.602 Got JSON-RPC error response 00:18:11.602 response: 00:18:11.602 { 00:18:11.602 "code": -5, 00:18:11.602 "message": 
"Input/output error" 00:18:11.602 } 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:11.602 09:03:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:11.602 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:11.862 request: 00:18:11.862 { 00:18:11.862 "name": "nvme0", 00:18:11.862 "trtype": "tcp", 00:18:11.862 "traddr": "10.0.0.2", 00:18:11.862 "adrfam": "ipv4", 00:18:11.862 "trsvcid": "4420", 00:18:11.862 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:11.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.862 "prchk_reftag": false, 00:18:11.862 "prchk_guard": false, 00:18:11.862 "hdgst": 
false, 00:18:11.862 "ddgst": false, 00:18:11.862 "dhchap_key": "key1", 00:18:11.862 "dhchap_ctrlr_key": "ckey2", 00:18:11.862 "allow_unrecognized_csi": false, 00:18:11.862 "method": "bdev_nvme_attach_controller", 00:18:11.862 "req_id": 1 00:18:11.862 } 00:18:11.862 Got JSON-RPC error response 00:18:11.862 response: 00:18:11.862 { 00:18:11.862 "code": -5, 00:18:11.862 "message": "Input/output error" 00:18:11.862 } 00:18:11.862 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:11.862 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.862 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.862 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.862 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.862 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.862 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.862 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.862 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.863 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.434 request: 00:18:12.434 { 00:18:12.434 "name": "nvme0", 00:18:12.434 "trtype": 
"tcp", 00:18:12.434 "traddr": "10.0.0.2", 00:18:12.434 "adrfam": "ipv4", 00:18:12.434 "trsvcid": "4420", 00:18:12.434 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:12.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.434 "prchk_reftag": false, 00:18:12.434 "prchk_guard": false, 00:18:12.434 "hdgst": false, 00:18:12.434 "ddgst": false, 00:18:12.434 "dhchap_key": "key1", 00:18:12.434 "dhchap_ctrlr_key": "ckey1", 00:18:12.434 "allow_unrecognized_csi": false, 00:18:12.434 "method": "bdev_nvme_attach_controller", 00:18:12.434 "req_id": 1 00:18:12.434 } 00:18:12.434 Got JSON-RPC error response 00:18:12.434 response: 00:18:12.434 { 00:18:12.434 "code": -5, 00:18:12.434 "message": "Input/output error" 00:18:12.434 } 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 663568 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 663568 ']' 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 663568 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 663568 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 663568' 00:18:12.434 killing process with pid 663568 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 663568 00:18:12.434 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 663568 00:18:12.695 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:12.695 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:12.696 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.696 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.696 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=689946 00:18:12.696 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 689946 00:18:12.696 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:12.696 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 689946 ']' 00:18:12.696 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.696 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.696 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.696 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.696 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 689946 00:18:13.638 
09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 689946 ']' 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.638 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.638 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.638 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:13.638 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:13.638 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.638 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.638 null0 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Elk 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.899 
09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.hAo ]] 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hAo 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RWX 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.V7R ]] 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.V7R 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.899 09:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ofo 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.1vB ]] 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1vB 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SoB 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.899 09:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.899 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.900 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:13.900 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.900 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:13.900 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.900 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.900 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.900 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:13.900 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.900 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.840 nvme0n1 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.840 { 00:18:14.840 "cntlid": 1, 00:18:14.840 "qid": 0, 00:18:14.840 "state": "enabled", 00:18:14.840 "thread": "nvmf_tgt_poll_group_000", 00:18:14.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.840 "listen_address": { 00:18:14.840 "trtype": "TCP", 00:18:14.840 "adrfam": "IPv4", 00:18:14.840 "traddr": "10.0.0.2", 00:18:14.840 "trsvcid": "4420" 00:18:14.840 }, 00:18:14.840 "peer_address": { 00:18:14.840 "trtype": "TCP", 00:18:14.840 "adrfam": "IPv4", 00:18:14.840 "traddr": "10.0.0.1", 00:18:14.840 "trsvcid": "54454" 00:18:14.840 }, 00:18:14.840 "auth": { 
00:18:14.840 "state": "completed", 00:18:14.840 "digest": "sha512", 00:18:14.840 "dhgroup": "ffdhe8192" 00:18:14.840 } 00:18:14.840 } 00:18:14.840 ]' 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.840 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.101 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:18:15.101 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:18:15.671 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:18:15.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 
--dhchap-key key3 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.931 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.191 request: 00:18:16.191 { 00:18:16.191 "name": "nvme0", 00:18:16.191 "trtype": "tcp", 00:18:16.191 "traddr": "10.0.0.2", 00:18:16.191 "adrfam": "ipv4", 00:18:16.191 "trsvcid": "4420", 00:18:16.191 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:16.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.191 "prchk_reftag": false, 00:18:16.191 "prchk_guard": false, 00:18:16.191 "hdgst": false, 00:18:16.191 "ddgst": false, 00:18:16.191 "dhchap_key": "key3", 00:18:16.191 "allow_unrecognized_csi": false, 00:18:16.191 "method": "bdev_nvme_attach_controller", 00:18:16.191 "req_id": 1 00:18:16.191 } 
00:18:16.191 Got JSON-RPC error response 00:18:16.191 response: 00:18:16.191 { 00:18:16.191 "code": -5, 00:18:16.191 "message": "Input/output error" 00:18:16.191 } 00:18:16.191 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:16.191 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.191 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.191 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.191 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:16.191 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:16.191 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:16.191 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:16.451 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:16.451 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:16.451 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:16.451 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:16.451 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.451 09:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:16.451 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.451 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.451 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.451 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.451 request: 00:18:16.451 { 00:18:16.451 "name": "nvme0", 00:18:16.451 "trtype": "tcp", 00:18:16.451 "traddr": "10.0.0.2", 00:18:16.451 "adrfam": "ipv4", 00:18:16.451 "trsvcid": "4420", 00:18:16.451 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:16.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.451 "prchk_reftag": false, 00:18:16.451 "prchk_guard": false, 00:18:16.451 "hdgst": false, 00:18:16.451 "ddgst": false, 00:18:16.451 "dhchap_key": "key3", 00:18:16.451 "allow_unrecognized_csi": false, 00:18:16.451 "method": "bdev_nvme_attach_controller", 00:18:16.451 "req_id": 1 00:18:16.451 } 00:18:16.451 Got JSON-RPC error response 00:18:16.451 response: 00:18:16.451 { 00:18:16.451 "code": -5, 00:18:16.451 "message": "Input/output error" 00:18:16.451 } 00:18:16.451 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:16.451 09:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.452 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.452 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.452 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:16.452 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:16.452 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:16.452 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:16.452 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:16.452 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:16.712 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:16.712 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:16.972 request: 00:18:16.972 { 00:18:16.972 "name": "nvme0", 00:18:16.972 "trtype": "tcp", 00:18:16.972 "traddr": "10.0.0.2", 00:18:16.972 "adrfam": "ipv4", 00:18:16.972 "trsvcid": "4420", 00:18:16.972 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:16.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.972 "prchk_reftag": false, 00:18:16.972 "prchk_guard": false, 00:18:16.972 "hdgst": false, 00:18:16.972 "ddgst": false, 00:18:16.972 "dhchap_key": "key0", 00:18:16.972 "dhchap_ctrlr_key": "key1", 00:18:16.972 "allow_unrecognized_csi": false, 00:18:16.972 "method": "bdev_nvme_attach_controller", 00:18:16.972 "req_id": 1 00:18:16.972 } 00:18:16.972 Got JSON-RPC error response 00:18:16.972 response: 00:18:16.972 { 00:18:16.972 "code": -5, 00:18:16.972 "message": "Input/output error" 00:18:16.972 } 00:18:16.972 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:16.972 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.972 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.972 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.972 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:16.972 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:16.972 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:17.273 nvme0n1 00:18:17.273 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:17.273 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:17.273 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.574 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.574 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.574 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.574 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:17.574 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.574 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.861 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:17.861 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:17.861 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:17.861 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:18.438 nvme0n1 00:18:18.438 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:18.438 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:18.438 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.698 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.698 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:18.698 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.698 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.698 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:18.698 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:18.698 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:18.698 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.698 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.698 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:18:18.698 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: --dhchap-ctrl-secret DHHC-1:03:OWQ0YTNlYWQxM2ExYzk1OTQwYTgxNDRlNjg4NzcwYmY1Y2YwZGJlMDc1ZjA5Njk2YWQ2MzU2ZDZjNDI0ZDlhZoLvUww=: 00:18:19.637 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:19.637 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:19.637 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:19.637 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:19.637 09:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:19.637 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:19.637 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:19.637 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.637 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.637 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:19.637 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:19.637 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:19.637 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:19.637 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.637 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:19.637 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.637 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:19.637 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
00:18:19.637 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:20.208 request: 00:18:20.208 { 00:18:20.208 "name": "nvme0", 00:18:20.208 "trtype": "tcp", 00:18:20.208 "traddr": "10.0.0.2", 00:18:20.208 "adrfam": "ipv4", 00:18:20.208 "trsvcid": "4420", 00:18:20.208 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.208 "prchk_reftag": false, 00:18:20.208 "prchk_guard": false, 00:18:20.208 "hdgst": false, 00:18:20.208 "ddgst": false, 00:18:20.208 "dhchap_key": "key1", 00:18:20.208 "allow_unrecognized_csi": false, 00:18:20.208 "method": "bdev_nvme_attach_controller", 00:18:20.208 "req_id": 1 00:18:20.208 } 00:18:20.208 Got JSON-RPC error response 00:18:20.208 response: 00:18:20.208 { 00:18:20.208 "code": -5, 00:18:20.208 "message": "Input/output error" 00:18:20.208 } 00:18:20.208 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:20.208 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.208 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.208 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.208 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:20.208 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:20.208 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:20.780 nvme0n1 00:18:20.780 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:20.780 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:20.780 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.041 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.041 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.041 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.041 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.041 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.041 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.302 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.302 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:21.302 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:21.302 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:21.302 nvme0n1 00:18:21.302 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:21.302 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:21.302 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.562 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.562 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.562 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:21.823 
09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: '' 2s 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: ]] 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZmUzYzBlOWEyOTJjMTZhMDg3MGY0NmJlYjM1NGI5ZTMrWfH0: 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:21.823 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:23.734 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:23.734 09:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:23.734 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:23.734 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:23.734 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:23.734 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:23.734 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:23.734 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:23.734 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.734 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: 2s 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # 
ckey=DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: ]] 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MGVhMDYwMDE2ZjQ4M2I2M2Y5MDdhYmNlOGUyZmUzNjQwNjI1ZmMxYzM3ODhhZjZi/i7R9w==: 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:23.994 09:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.908 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:25.908 09:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:26.851 nvme0n1 00:18:26.851 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:26.851 09:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.851 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.851 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.851 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:26.851 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.112 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:27.112 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:27.112 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.374 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.374 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.374 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.374 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.374 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.374 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 
00:18:27.374 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:27.636 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:27.636 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:27.636 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:27.636 09:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:27.636 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.207 request: 00:18:28.207 { 00:18:28.207 "name": "nvme0", 00:18:28.207 "dhchap_key": "key1", 00:18:28.207 "dhchap_ctrlr_key": "key3", 00:18:28.207 "method": "bdev_nvme_set_keys", 00:18:28.207 "req_id": 1 00:18:28.207 } 00:18:28.207 Got JSON-RPC error response 00:18:28.207 response: 00:18:28.207 { 00:18:28.207 "code": -13, 00:18:28.207 "message": "Permission denied" 00:18:28.207 } 00:18:28.207 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.207 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.207 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.207 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.207 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:28.207 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:28.207 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.467 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:28.467 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:29.409 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:29.409 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:29.409 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.409 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:29.409 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:29.409 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.670 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.670 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.670 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:29.670 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 
--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:29.670 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:30.240 nvme0n1 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:30.240 
09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:30.240 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:30.809 request: 00:18:30.809 { 00:18:30.809 "name": "nvme0", 00:18:30.809 "dhchap_key": "key2", 00:18:30.809 "dhchap_ctrlr_key": "key0", 00:18:30.809 "method": "bdev_nvme_set_keys", 00:18:30.809 "req_id": 1 00:18:30.809 } 00:18:30.809 Got JSON-RPC error response 00:18:30.809 response: 00:18:30.809 { 00:18:30.809 "code": -13, 00:18:30.809 "message": "Permission denied" 00:18:30.809 } 00:18:30.809 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:30.809 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.809 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.809 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.809 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:30.809 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:30.809 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.069 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:31.069 09:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:32.008 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:32.008 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:32.008 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 663663 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 663663 ']' 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 663663 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 663663 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 663663' 00:18:32.269 killing process with 
pid 663663 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 663663 00:18:32.269 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 663663 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:32.529 rmmod nvme_tcp 00:18:32.529 rmmod nvme_fabrics 00:18:32.529 rmmod nvme_keyring 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 689946 ']' 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 689946 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 689946 ']' 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 689946 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:32.529 
09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 689946 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 689946' 00:18:32.529 killing process with pid 689946 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 689946 00:18:32.529 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 689946 00:18:32.529 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:32.529 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:32.529 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:32.529 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:32.529 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:32.529 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:32.529 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:32.529 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:32.529 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:32.529 09:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.529 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.529 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Elk /tmp/spdk.key-sha256.RWX /tmp/spdk.key-sha384.ofo /tmp/spdk.key-sha512.SoB /tmp/spdk.key-sha512.hAo /tmp/spdk.key-sha384.V7R /tmp/spdk.key-sha256.1vB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:35.076 00:18:35.076 real 2m36.750s 00:18:35.076 user 5m52.841s 00:18:35.076 sys 0m24.758s 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.076 ************************************ 00:18:35.076 END TEST nvmf_auth_target 00:18:35.076 ************************************ 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set 
+x 00:18:35.076 ************************************ 00:18:35.076 START TEST nvmf_bdevio_no_huge 00:18:35.076 ************************************ 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:35.076 * Looking for test storage... 00:18:35.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@340 -- # ver1_l=2 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:35.076 09:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:35.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.076 --rc genhtml_branch_coverage=1 00:18:35.076 --rc genhtml_function_coverage=1 00:18:35.076 --rc genhtml_legend=1 00:18:35.076 --rc geninfo_all_blocks=1 00:18:35.076 --rc geninfo_unexecuted_blocks=1 00:18:35.076 00:18:35.076 ' 00:18:35.076 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:35.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.076 --rc genhtml_branch_coverage=1 00:18:35.077 --rc genhtml_function_coverage=1 00:18:35.077 --rc genhtml_legend=1 00:18:35.077 --rc geninfo_all_blocks=1 00:18:35.077 --rc geninfo_unexecuted_blocks=1 00:18:35.077 00:18:35.077 ' 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:35.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.077 --rc genhtml_branch_coverage=1 00:18:35.077 --rc genhtml_function_coverage=1 00:18:35.077 --rc genhtml_legend=1 00:18:35.077 --rc geninfo_all_blocks=1 00:18:35.077 --rc geninfo_unexecuted_blocks=1 00:18:35.077 00:18:35.077 ' 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:35.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.077 --rc genhtml_branch_coverage=1 00:18:35.077 --rc genhtml_function_coverage=1 00:18:35.077 --rc 
genhtml_legend=1 00:18:35.077 --rc geninfo_all_blocks=1 00:18:35.077 --rc geninfo_unexecuted_blocks=1 00:18:35.077 00:18:35.077 ' 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:35.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:35.077 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 
0x159b)' 00:18:43.216 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:43.216 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:43.216 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.216 
09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:43.216 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:43.216 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:43.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:18:43.217 00:18:43.217 --- 10.0.0.2 ping statistics --- 00:18:43.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.217 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:43.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:43.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:18:43.217 00:18:43.217 --- 10.0.0.1 ping statistics --- 00:18:43.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.217 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=698108 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 698108 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 698108 ']' 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.217 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.217 [2024-11-20 09:04:08.023518] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:18:43.217 [2024-11-20 09:04:08.023592] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:43.217 [2024-11-20 09:04:08.132460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.217 [2024-11-20 09:04:08.192975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.217 [2024-11-20 09:04:08.193027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.217 [2024-11-20 09:04:08.193035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.217 [2024-11-20 09:04:08.193042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.217 [2024-11-20 09:04:08.193048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:43.217 [2024-11-20 09:04:08.194545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:43.217 [2024-11-20 09:04:08.194703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:43.217 [2024-11-20 09:04:08.194880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:43.217 [2024-11-20 09:04:08.194880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.477 [2024-11-20 09:04:08.901690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:43.477 09:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.477 Malloc0 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.477 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.478 [2024-11-20 09:04:08.955505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.478 09:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:43.478 { 00:18:43.478 "params": { 00:18:43.478 "name": "Nvme$subsystem", 00:18:43.478 "trtype": "$TEST_TRANSPORT", 00:18:43.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:43.478 "adrfam": "ipv4", 00:18:43.478 "trsvcid": "$NVMF_PORT", 00:18:43.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:43.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:43.478 "hdgst": ${hdgst:-false}, 00:18:43.478 "ddgst": ${ddgst:-false} 00:18:43.478 }, 00:18:43.478 "method": "bdev_nvme_attach_controller" 00:18:43.478 } 00:18:43.478 EOF 00:18:43.478 )") 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:43.478 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:43.478 "params": { 00:18:43.478 "name": "Nvme1", 00:18:43.478 "trtype": "tcp", 00:18:43.478 "traddr": "10.0.0.2", 00:18:43.478 "adrfam": "ipv4", 00:18:43.478 "trsvcid": "4420", 00:18:43.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.478 "hdgst": false, 00:18:43.478 "ddgst": false 00:18:43.478 }, 00:18:43.478 "method": "bdev_nvme_attach_controller" 00:18:43.478 }' 00:18:43.739 [2024-11-20 09:04:09.023334] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:18:43.739 [2024-11-20 09:04:09.023412] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid698311 ] 00:18:43.739 [2024-11-20 09:04:09.122026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:43.739 [2024-11-20 09:04:09.182778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.739 [2024-11-20 09:04:09.182938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.739 [2024-11-20 09:04:09.182938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.999 I/O targets: 00:18:43.999 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:43.999 00:18:43.999 00:18:43.999 CUnit - A unit testing framework for C - Version 2.1-3 00:18:43.999 http://cunit.sourceforge.net/ 00:18:43.999 00:18:43.999 00:18:43.999 Suite: bdevio tests on: Nvme1n1 00:18:43.999 Test: blockdev write read block ...passed 00:18:43.999 Test: blockdev write zeroes read block ...passed 00:18:43.999 Test: blockdev write zeroes read no split ...passed 00:18:44.260 Test: blockdev write zeroes 
read split ...passed 00:18:44.260 Test: blockdev write zeroes read split partial ...passed 00:18:44.260 Test: blockdev reset ...[2024-11-20 09:04:09.542908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:44.260 [2024-11-20 09:04:09.543012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b800 (9): Bad file descriptor 00:18:44.260 [2024-11-20 09:04:09.555079] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:44.260 passed 00:18:44.260 Test: blockdev write read 8 blocks ...passed 00:18:44.260 Test: blockdev write read size > 128k ...passed 00:18:44.260 Test: blockdev write read invalid size ...passed 00:18:44.260 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:44.260 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:44.260 Test: blockdev write read max offset ...passed 00:18:44.260 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:44.260 Test: blockdev writev readv 8 blocks ...passed 00:18:44.260 Test: blockdev writev readv 30 x 1block ...passed 00:18:44.260 Test: blockdev writev readv block ...passed 00:18:44.260 Test: blockdev writev readv size > 128k ...passed 00:18:44.260 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:44.260 Test: blockdev comparev and writev ...[2024-11-20 09:04:09.781797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:44.260 [2024-11-20 09:04:09.781846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.260 [2024-11-20 09:04:09.781862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:44.260 [2024-11-20 
09:04:09.781871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:44.260 [2024-11-20 09:04:09.782446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:44.260 [2024-11-20 09:04:09.782461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:44.260 [2024-11-20 09:04:09.782476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:44.260 [2024-11-20 09:04:09.782484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:44.260 [2024-11-20 09:04:09.783062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:44.260 [2024-11-20 09:04:09.783075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:44.260 [2024-11-20 09:04:09.783090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:44.260 [2024-11-20 09:04:09.783098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:44.260 [2024-11-20 09:04:09.783656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:44.260 [2024-11-20 09:04:09.783671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:44.260 [2024-11-20 09:04:09.783686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:44.260 [2024-11-20 09:04:09.783694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:44.522 passed 00:18:44.522 Test: blockdev nvme passthru rw ...passed 00:18:44.522 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:04:09.868862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:44.522 [2024-11-20 09:04:09.868879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:44.522 [2024-11-20 09:04:09.869279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:44.522 [2024-11-20 09:04:09.869295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:44.522 [2024-11-20 09:04:09.869684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:44.522 [2024-11-20 09:04:09.869698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:44.522 [2024-11-20 09:04:09.870087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:44.522 [2024-11-20 09:04:09.870100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:44.522 passed 00:18:44.522 Test: blockdev nvme admin passthru ...passed 00:18:44.522 Test: blockdev copy ...passed 00:18:44.522 00:18:44.522 Run Summary: Type Total Ran Passed Failed Inactive 00:18:44.522 suites 1 1 n/a 0 0 00:18:44.522 tests 23 23 23 0 0 00:18:44.522 asserts 152 152 152 0 n/a 00:18:44.522 00:18:44.522 Elapsed time = 1.046 seconds 
00:18:44.783 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:44.783 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.783 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.783 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.783 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:44.783 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:44.783 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:44.783 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:44.783 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:44.784 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:44.784 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:44.784 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:44.784 rmmod nvme_tcp 00:18:44.784 rmmod nvme_fabrics 00:18:44.784 rmmod nvme_keyring 00:18:44.784 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 698108 ']' 00:18:45.045 09:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 698108 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 698108 ']' 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 698108 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 698108 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 698108' 00:18:45.045 killing process with pid 698108 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 698108 00:18:45.045 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 698108 00:18:45.306 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:45.306 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:45.306 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:45.306 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:45.306 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:45.306 09:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:45.306 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:45.306 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:45.306 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:45.306 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.306 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.306 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.852 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:47.852 00:18:47.852 real 0m12.637s 00:18:47.852 user 0m14.080s 00:18:47.852 sys 0m6.886s 00:18:47.852 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.852 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.852 ************************************ 00:18:47.852 END TEST nvmf_bdevio_no_huge 00:18:47.852 ************************************ 00:18:47.852 09:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:47.852 09:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:47.852 09:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.852 09:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:47.852 
************************************ 00:18:47.852 START TEST nvmf_tls 00:18:47.852 ************************************ 00:18:47.852 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:47.852 * Looking for test storage... 00:18:47.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:47.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.852 --rc genhtml_branch_coverage=1 00:18:47.852 --rc genhtml_function_coverage=1 00:18:47.852 --rc genhtml_legend=1 00:18:47.852 --rc geninfo_all_blocks=1 00:18:47.852 --rc geninfo_unexecuted_blocks=1 00:18:47.852 00:18:47.852 ' 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:47.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.852 --rc genhtml_branch_coverage=1 00:18:47.852 --rc genhtml_function_coverage=1 00:18:47.852 --rc genhtml_legend=1 00:18:47.852 --rc geninfo_all_blocks=1 00:18:47.852 --rc geninfo_unexecuted_blocks=1 00:18:47.852 00:18:47.852 ' 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:47.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.852 --rc genhtml_branch_coverage=1 00:18:47.852 --rc genhtml_function_coverage=1 00:18:47.852 --rc genhtml_legend=1 00:18:47.852 --rc geninfo_all_blocks=1 00:18:47.852 --rc geninfo_unexecuted_blocks=1 00:18:47.852 00:18:47.852 ' 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:47.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.852 --rc genhtml_branch_coverage=1 00:18:47.852 --rc genhtml_function_coverage=1 00:18:47.852 --rc genhtml_legend=1 00:18:47.852 --rc geninfo_all_blocks=1 00:18:47.852 --rc geninfo_unexecuted_blocks=1 00:18:47.852 00:18:47.852 ' 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.852 
09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.852 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:47.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:47.853 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:56.057 09:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:56.057 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:56.057 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:56.057 09:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.057 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:56.058 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:56.058 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:56.058 09:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:56.058 
09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:56.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:18:56.058 00:18:56.058 --- 10.0.0.2 ping statistics --- 00:18:56.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.058 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:56.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:18:56.058 00:18:56.058 --- 10.0.0.1 ping statistics --- 00:18:56.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.058 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=702804 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 702804 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 702804 ']' 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.058 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.058 [2024-11-20 09:04:20.753575] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:18:56.058 [2024-11-20 09:04:20.753643] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.058 [2024-11-20 09:04:20.853881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.058 [2024-11-20 09:04:20.904677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.058 [2024-11-20 09:04:20.904729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:56.058 [2024-11-20 09:04:20.904738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.058 [2024-11-20 09:04:20.904746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.058 [2024-11-20 09:04:20.904753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.058 [2024-11-20 09:04:20.905553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.058 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.058 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:56.058 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:56.058 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.058 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.319 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.319 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:56.319 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:56.319 true 00:18:56.319 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:56.319 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:56.579 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:56.579 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:56.579 
09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:56.840 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:56.840 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:56.840 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:56.840 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:56.840 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:57.100 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.100 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:57.360 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:57.360 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:57.360 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:57.360 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.620 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:57.620 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:57.620 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:18:57.620 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.620 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:57.881 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:57.881 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:57.881 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:58.142 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.142 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:58.142 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:58.142 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:58.142 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:58.142 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:58.142 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:58.142 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:58.142 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:58.142 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:58.142 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:58.402 09:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:58.402 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:58.402 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:58.402 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:58.402 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:58.402 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.rHCUioZbm0 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.W9NWir0pvC 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.rHCUioZbm0 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.W9NWir0pvC 00:18:58.403 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:58.663 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:58.923 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.rHCUioZbm0 00:18:58.923 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rHCUioZbm0 00:18:58.923 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:58.923 [2024-11-20 09:04:24.341004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.923 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:59.183 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:59.183 [2024-11-20 09:04:24.677820] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.183 [2024-11-20 09:04:24.678020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.183 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:59.443 malloc0 00:18:59.443 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:59.703 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rHCUioZbm0 00:18:59.703 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:59.962 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rHCUioZbm0 00:19:12.187 Initializing NVMe Controllers 00:19:12.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:12.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:12.187 Initialization complete. Launching workers. 
00:19:12.187 ======================================================== 00:19:12.187 Latency(us) 00:19:12.187 Device Information : IOPS MiB/s Average min max 00:19:12.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18934.37 73.96 3380.27 1148.57 3924.60 00:19:12.187 ======================================================== 00:19:12.187 Total : 18934.37 73.96 3380.27 1148.57 3924.60 00:19:12.187 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rHCUioZbm0 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rHCUioZbm0 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=705760 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 705760 /var/tmp/bdevperf.sock 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 705760 ']' 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.187 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.187 [2024-11-20 09:04:35.560914] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:19:12.187 [2024-11-20 09:04:35.560969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705760 ] 00:19:12.187 [2024-11-20 09:04:35.647334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.187 [2024-11-20 09:04:35.682600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.187 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.187 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:12.187 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rHCUioZbm0 00:19:12.187 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk 
key0 00:19:12.187 [2024-11-20 09:04:36.678377] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.187 TLSTESTn1 00:19:12.187 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:12.187 Running I/O for 10 seconds... 00:19:13.387 4149.00 IOPS, 16.21 MiB/s [2024-11-20T08:04:40.298Z] 4956.50 IOPS, 19.36 MiB/s [2024-11-20T08:04:41.239Z] 5007.33 IOPS, 19.56 MiB/s [2024-11-20T08:04:42.179Z] 5126.50 IOPS, 20.03 MiB/s [2024-11-20T08:04:43.121Z] 5070.60 IOPS, 19.81 MiB/s [2024-11-20T08:04:44.061Z] 5086.17 IOPS, 19.87 MiB/s [2024-11-20T08:04:45.094Z] 5282.86 IOPS, 20.64 MiB/s [2024-11-20T08:04:46.090Z] 5391.12 IOPS, 21.06 MiB/s [2024-11-20T08:04:47.033Z] 5420.00 IOPS, 21.17 MiB/s [2024-11-20T08:04:47.033Z] 5384.70 IOPS, 21.03 MiB/s 00:19:21.504 Latency(us) 00:19:21.504 [2024-11-20T08:04:47.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.504 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:21.504 Verification LBA range: start 0x0 length 0x2000 00:19:21.504 TLSTESTn1 : 10.01 5391.04 21.06 0.00 0.00 23709.27 4560.21 88255.15 00:19:21.504 [2024-11-20T08:04:47.033Z] =================================================================================================================== 00:19:21.504 [2024-11-20T08:04:47.033Z] Total : 5391.04 21.06 0.00 0.00 23709.27 4560.21 88255.15 00:19:21.504 { 00:19:21.504 "results": [ 00:19:21.504 { 00:19:21.504 "job": "TLSTESTn1", 00:19:21.504 "core_mask": "0x4", 00:19:21.504 "workload": "verify", 00:19:21.504 "status": "finished", 00:19:21.504 "verify_range": { 00:19:21.504 "start": 0, 00:19:21.504 "length": 8192 00:19:21.504 }, 00:19:21.504 "queue_depth": 128, 00:19:21.504 "io_size": 4096, 00:19:21.504 "runtime": 10.011433, 00:19:21.504 "iops": 5391.036428051809, 
00:19:21.504 "mibps": 21.058736047077378, 00:19:21.504 "io_failed": 0, 00:19:21.504 "io_timeout": 0, 00:19:21.504 "avg_latency_us": 23709.271616393686, 00:19:21.504 "min_latency_us": 4560.213333333333, 00:19:21.504 "max_latency_us": 88255.14666666667 00:19:21.504 } 00:19:21.504 ], 00:19:21.504 "core_count": 1 00:19:21.504 } 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 705760 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 705760 ']' 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 705760 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 705760 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 705760' 00:19:21.504 killing process with pid 705760 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 705760 00:19:21.504 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.504 00:19:21.504 Latency(us) 00:19:21.504 [2024-11-20T08:04:47.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.504 [2024-11-20T08:04:47.033Z] 
=================================================================================================================== 00:19:21.504 [2024-11-20T08:04:47.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.504 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 705760 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.W9NWir0pvC 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.W9NWir0pvC 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.W9NWir0pvC 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.W9NWir0pvC 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=707895 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 707895 /var/tmp/bdevperf.sock 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 707895 ']' 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.765 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.765 [2024-11-20 09:04:47.150181] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:19:21.765 [2024-11-20 09:04:47.150238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707895 ] 00:19:21.765 [2024-11-20 09:04:47.235803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.765 [2024-11-20 09:04:47.264735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.706 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.706 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:22.707 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.W9NWir0pvC 00:19:22.707 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:22.967 [2024-11-20 09:04:48.235186] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:22.967 [2024-11-20 09:04:48.245528] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:22.967 [2024-11-20 09:04:48.246244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176cbb0 (107): Transport endpoint is not connected 00:19:22.967 [2024-11-20 09:04:48.247239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176cbb0 (9): Bad file descriptor 00:19:22.967 
[2024-11-20 09:04:48.248241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:22.967 [2024-11-20 09:04:48.248249] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:22.967 [2024-11-20 09:04:48.248254] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:22.967 [2024-11-20 09:04:48.248262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:22.967 request: 00:19:22.967 { 00:19:22.967 "name": "TLSTEST", 00:19:22.967 "trtype": "tcp", 00:19:22.967 "traddr": "10.0.0.2", 00:19:22.967 "adrfam": "ipv4", 00:19:22.967 "trsvcid": "4420", 00:19:22.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.968 "prchk_reftag": false, 00:19:22.968 "prchk_guard": false, 00:19:22.968 "hdgst": false, 00:19:22.968 "ddgst": false, 00:19:22.968 "psk": "key0", 00:19:22.968 "allow_unrecognized_csi": false, 00:19:22.968 "method": "bdev_nvme_attach_controller", 00:19:22.968 "req_id": 1 00:19:22.968 } 00:19:22.968 Got JSON-RPC error response 00:19:22.968 response: 00:19:22.968 { 00:19:22.968 "code": -5, 00:19:22.968 "message": "Input/output error" 00:19:22.968 } 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 707895 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 707895 ']' 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 707895 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 707895 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 707895' 00:19:22.968 killing process with pid 707895 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 707895 00:19:22.968 Received shutdown signal, test time was about 10.000000 seconds 00:19:22.968 00:19:22.968 Latency(us) 00:19:22.968 [2024-11-20T08:04:48.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.968 [2024-11-20T08:04:48.497Z] =================================================================================================================== 00:19:22.968 [2024-11-20T08:04:48.497Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 707895 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rHCUioZbm0 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rHCUioZbm0 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rHCUioZbm0 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rHCUioZbm0 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=708234 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 708234 /var/tmp/bdevperf.sock 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 
-w verify -t 10 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 708234 ']' 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.968 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.968 [2024-11-20 09:04:48.475861] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:19:22.968 [2024-11-20 09:04:48.475919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708234 ] 00:19:23.230 [2024-11-20 09:04:48.561108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.230 [2024-11-20 09:04:48.588990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.801 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.801 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:23.801 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rHCUioZbm0 00:19:24.062 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:24.324 [2024-11-20 09:04:49.611862] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.324 [2024-11-20 09:04:49.618768] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:24.324 [2024-11-20 09:04:49.618787] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:24.324 [2024-11-20 09:04:49.618805] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:24.324 [2024-11-20 09:04:49.619011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68fbb0 (107): Transport endpoint is not connected 00:19:24.324 [2024-11-20 09:04:49.620008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68fbb0 (9): Bad file descriptor 00:19:24.324 [2024-11-20 09:04:49.621010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:24.324 [2024-11-20 09:04:49.621017] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:24.324 [2024-11-20 09:04:49.621023] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:24.324 [2024-11-20 09:04:49.621031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:24.324 request: 00:19:24.324 { 00:19:24.324 "name": "TLSTEST", 00:19:24.324 "trtype": "tcp", 00:19:24.324 "traddr": "10.0.0.2", 00:19:24.324 "adrfam": "ipv4", 00:19:24.324 "trsvcid": "4420", 00:19:24.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.324 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:24.324 "prchk_reftag": false, 00:19:24.324 "prchk_guard": false, 00:19:24.324 "hdgst": false, 00:19:24.324 "ddgst": false, 00:19:24.324 "psk": "key0", 00:19:24.324 "allow_unrecognized_csi": false, 00:19:24.324 "method": "bdev_nvme_attach_controller", 00:19:24.324 "req_id": 1 00:19:24.324 } 00:19:24.324 Got JSON-RPC error response 00:19:24.324 response: 00:19:24.324 { 00:19:24.324 "code": -5, 00:19:24.324 "message": "Input/output error" 00:19:24.324 } 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 708234 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 708234 ']' 00:19:24.324 09:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 708234 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 708234 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 708234' 00:19:24.324 killing process with pid 708234 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 708234 00:19:24.324 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.324 00:19:24.324 Latency(us) 00:19:24.324 [2024-11-20T08:04:49.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.324 [2024-11-20T08:04:49.853Z] =================================================================================================================== 00:19:24.324 [2024-11-20T08:04:49.853Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 708234 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.324 09:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rHCUioZbm0 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rHCUioZbm0 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rHCUioZbm0 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rHCUioZbm0 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=708580 00:19:24.324 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.325 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 708580 /var/tmp/bdevperf.sock 00:19:24.325 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.325 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 708580 ']' 00:19:24.325 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.325 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.325 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.325 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.325 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.586 [2024-11-20 09:04:49.875035] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:19:24.586 [2024-11-20 09:04:49.875090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708580 ] 00:19:24.586 [2024-11-20 09:04:49.958336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.586 [2024-11-20 09:04:49.986563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.158 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.158 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.158 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rHCUioZbm0 00:19:25.419 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:25.679 [2024-11-20 09:04:51.009979] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.679 [2024-11-20 09:04:51.014497] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:25.679 [2024-11-20 09:04:51.014516] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:25.679 [2024-11-20 09:04:51.014534] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:25.679 [2024-11-20 09:04:51.015193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2bb0 (107): Transport endpoint is not connected 00:19:25.679 [2024-11-20 09:04:51.016186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2bb0 (9): Bad file descriptor 00:19:25.679 [2024-11-20 09:04:51.017188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:25.679 [2024-11-20 09:04:51.017199] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:25.679 [2024-11-20 09:04:51.017205] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:25.679 [2024-11-20 09:04:51.017213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:25.679 request: 00:19:25.679 { 00:19:25.679 "name": "TLSTEST", 00:19:25.679 "trtype": "tcp", 00:19:25.679 "traddr": "10.0.0.2", 00:19:25.679 "adrfam": "ipv4", 00:19:25.679 "trsvcid": "4420", 00:19:25.679 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:25.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.679 "prchk_reftag": false, 00:19:25.680 "prchk_guard": false, 00:19:25.680 "hdgst": false, 00:19:25.680 "ddgst": false, 00:19:25.680 "psk": "key0", 00:19:25.680 "allow_unrecognized_csi": false, 00:19:25.680 "method": "bdev_nvme_attach_controller", 00:19:25.680 "req_id": 1 00:19:25.680 } 00:19:25.680 Got JSON-RPC error response 00:19:25.680 response: 00:19:25.680 { 00:19:25.680 "code": -5, 00:19:25.680 "message": "Input/output error" 00:19:25.680 } 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 708580 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 708580 ']' 00:19:25.680 09:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 708580 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 708580 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 708580' 00:19:25.680 killing process with pid 708580 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 708580 00:19:25.680 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.680 00:19:25.680 Latency(us) 00:19:25.680 [2024-11-20T08:04:51.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.680 [2024-11-20T08:04:51.209Z] =================================================================================================================== 00:19:25.680 [2024-11-20T08:04:51.209Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 708580 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.680 09:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.680 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=708885 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.941 09:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 708885 /var/tmp/bdevperf.sock 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 708885 ']' 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.941 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.941 [2024-11-20 09:04:51.258724] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:19:25.941 [2024-11-20 09:04:51.258779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708885 ] 00:19:25.941 [2024-11-20 09:04:51.341607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.941 [2024-11-20 09:04:51.369931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.882 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.882 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.882 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:26.882 [2024-11-20 09:04:52.215907] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:26.882 [2024-11-20 09:04:52.215931] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:26.882 request: 00:19:26.882 { 00:19:26.882 "name": "key0", 00:19:26.882 "path": "", 00:19:26.882 "method": "keyring_file_add_key", 00:19:26.882 "req_id": 1 00:19:26.882 } 00:19:26.882 Got JSON-RPC error response 00:19:26.882 response: 00:19:26.882 { 00:19:26.882 "code": -1, 00:19:26.882 "message": "Operation not permitted" 00:19:26.882 } 00:19:26.882 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:26.882 [2024-11-20 09:04:52.396440] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:26.882 [2024-11-20 09:04:52.396462] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:26.882 request: 00:19:26.882 { 00:19:26.882 "name": "TLSTEST", 00:19:26.882 "trtype": "tcp", 00:19:26.882 "traddr": "10.0.0.2", 00:19:26.882 "adrfam": "ipv4", 00:19:26.882 "trsvcid": "4420", 00:19:26.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.882 "prchk_reftag": false, 00:19:26.882 "prchk_guard": false, 00:19:26.882 "hdgst": false, 00:19:26.882 "ddgst": false, 00:19:26.882 "psk": "key0", 00:19:26.882 "allow_unrecognized_csi": false, 00:19:26.882 "method": "bdev_nvme_attach_controller", 00:19:26.882 "req_id": 1 00:19:26.882 } 00:19:26.882 Got JSON-RPC error response 00:19:26.882 response: 00:19:26.882 { 00:19:26.882 "code": -126, 00:19:26.882 "message": "Required key not available" 00:19:26.882 } 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 708885 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 708885 ']' 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 708885 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 708885 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 708885' 00:19:27.143 killing process with pid 708885 00:19:27.143 
09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 708885 00:19:27.143 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.143 00:19:27.143 Latency(us) 00:19:27.143 [2024-11-20T08:04:52.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.143 [2024-11-20T08:04:52.672Z] =================================================================================================================== 00:19:27.143 [2024-11-20T08:04:52.672Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 708885 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 702804 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 702804 ']' 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 702804 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 702804 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 702804' 00:19:27.143 killing process with pid 702804 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 702804 00:19:27.143 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 702804 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.rBZjjuoquE 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:27.404 09:04:52 
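The `format_interchange_psk` step above turns the raw 48-hex-character key into the NVMe/TCP TLS PSK interchange form (`NVMeTLSkey-1:02:...:`), which is what `keyring_file_add_key` later consumes. A minimal Python sketch of that framing, assuming (as SPDK's helper does) that the configured key bytes are suffixed with a little-endian CRC-32 before base64 encoding; the function name mirrors the shell helper but this is an illustrative reimplementation, not SPDK's code:

```python
import base64
import zlib


def format_interchange_psk(key: str, hmac_id: int) -> str:
    """Wrap a configured PSK in the TLS PSK interchange format:
    'NVMeTLSkey-1:<hh>:<base64(key bytes || CRC-32 LE)>:'.

    hmac_id selects the hash field in the prefix (e.g. 2 -> '02').
    Assumption: the CRC-32 tail is appended little-endian, matching
    the key_long value printed in the log above.
    """
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, "little")  # 4-byte integrity tail
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac_id, b64)
```

Reproducing the log's input (`00112233445566778899aabbccddeeff0011223344556677`, digest 2) should yield the same `key_long` string the test then writes to `/tmp/tmp.rBZjjuoquE` and `chmod 0600`s before handing it to the target.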
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.rBZjjuoquE 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=709182 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 709182 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 709182 ']' 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.404 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.404 [2024-11-20 09:04:52.866708] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:19:27.404 [2024-11-20 09:04:52.866768] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.666 [2024-11-20 09:04:52.958870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.666 [2024-11-20 09:04:52.998462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.666 [2024-11-20 09:04:52.998507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.666 [2024-11-20 09:04:52.998515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.666 [2024-11-20 09:04:52.998521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.666 [2024-11-20 09:04:52.998527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:27.666 [2024-11-20 09:04:52.999137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.237 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.237 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.237 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.237 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.237 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.237 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.237 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.rBZjjuoquE 00:19:28.237 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rBZjjuoquE 00:19:28.237 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.497 [2024-11-20 09:04:53.869691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.497 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:28.758 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:28.758 [2024-11-20 09:04:54.234582] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:28.758 [2024-11-20 09:04:54.234764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:28.758 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:29.018 malloc0 00:19:29.019 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.289 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rBZjjuoquE 00:19:29.289 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rBZjjuoquE 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rBZjjuoquE 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=709642 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.555 09:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 709642 /var/tmp/bdevperf.sock 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 709642 ']' 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.555 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.555 [2024-11-20 09:04:55.013998] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:19:29.555 [2024-11-20 09:04:55.014049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709642 ] 00:19:29.816 [2024-11-20 09:04:55.097964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.816 [2024-11-20 09:04:55.126725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.389 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.389 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:30.389 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rBZjjuoquE 00:19:30.650 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.650 [2024-11-20 09:04:56.157263] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.910 TLSTESTn1 00:19:30.911 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:30.911 Running I/O for 10 seconds... 
00:19:33.238 6606.00 IOPS, 25.80 MiB/s [2024-11-20T08:04:59.724Z] 6497.50 IOPS, 25.38 MiB/s [2024-11-20T08:05:00.666Z] 6556.67 IOPS, 25.61 MiB/s [2024-11-20T08:05:01.609Z] 6513.00 IOPS, 25.44 MiB/s [2024-11-20T08:05:02.550Z] 6449.00 IOPS, 25.19 MiB/s [2024-11-20T08:05:03.492Z] 6412.67 IOPS, 25.05 MiB/s [2024-11-20T08:05:04.433Z] 6389.14 IOPS, 24.96 MiB/s [2024-11-20T08:05:05.375Z] 6351.25 IOPS, 24.81 MiB/s [2024-11-20T08:05:06.760Z] 6349.11 IOPS, 24.80 MiB/s [2024-11-20T08:05:06.760Z] 6360.30 IOPS, 24.84 MiB/s 00:19:41.231 Latency(us) 00:19:41.231 [2024-11-20T08:05:06.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.231 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:41.231 Verification LBA range: start 0x0 length 0x2000 00:19:41.232 TLSTESTn1 : 10.01 6364.44 24.86 0.00 0.00 20083.20 4778.67 46530.56 00:19:41.232 [2024-11-20T08:05:06.761Z] =================================================================================================================== 00:19:41.232 [2024-11-20T08:05:06.761Z] Total : 6364.44 24.86 0.00 0.00 20083.20 4778.67 46530.56 00:19:41.232 { 00:19:41.232 "results": [ 00:19:41.232 { 00:19:41.232 "job": "TLSTESTn1", 00:19:41.232 "core_mask": "0x4", 00:19:41.232 "workload": "verify", 00:19:41.232 "status": "finished", 00:19:41.232 "verify_range": { 00:19:41.232 "start": 0, 00:19:41.232 "length": 8192 00:19:41.232 }, 00:19:41.232 "queue_depth": 128, 00:19:41.232 "io_size": 4096, 00:19:41.232 "runtime": 10.013451, 00:19:41.232 "iops": 6364.439192841709, 00:19:41.232 "mibps": 24.861090597037926, 00:19:41.232 "io_failed": 0, 00:19:41.232 "io_timeout": 0, 00:19:41.232 "avg_latency_us": 20083.19552779957, 00:19:41.232 "min_latency_us": 4778.666666666667, 00:19:41.232 "max_latency_us": 46530.56 00:19:41.232 } 00:19:41.232 ], 00:19:41.232 "core_count": 1 00:19:41.232 } 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 709642 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 709642 ']' 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 709642 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 709642 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 709642' 00:19:41.232 killing process with pid 709642 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 709642 00:19:41.232 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.232 00:19:41.232 Latency(us) 00:19:41.232 [2024-11-20T08:05:06.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.232 [2024-11-20T08:05:06.761Z] =================================================================================================================== 00:19:41.232 [2024-11-20T08:05:06.761Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 709642 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.rBZjjuoquE 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rBZjjuoquE 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rBZjjuoquE 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rBZjjuoquE 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rBZjjuoquE 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=711847 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 711847 /var/tmp/bdevperf.sock 00:19:41.232 
09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 711847 ']' 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.232 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.232 [2024-11-20 09:05:06.628759] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:19:41.232 [2024-11-20 09:05:06.628815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711847 ] 00:19:41.232 [2024-11-20 09:05:06.711231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.232 [2024-11-20 09:05:06.739123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.173 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.173 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:42.173 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rBZjjuoquE 00:19:42.173 [2024-11-20 09:05:07.585292] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rBZjjuoquE': 0100666 00:19:42.173 [2024-11-20 09:05:07.585320] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:42.173 request: 00:19:42.173 { 00:19:42.173 "name": "key0", 00:19:42.173 "path": "/tmp/tmp.rBZjjuoquE", 00:19:42.173 "method": "keyring_file_add_key", 00:19:42.173 "req_id": 1 00:19:42.173 } 00:19:42.173 Got JSON-RPC error response 00:19:42.173 response: 00:19:42.173 { 00:19:42.173 "code": -1, 00:19:42.173 "message": "Operation not permitted" 00:19:42.173 } 00:19:42.173 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:42.435 [2024-11-20 09:05:07.769830] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.435 [2024-11-20 09:05:07.769854] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:42.435 request: 00:19:42.435 { 00:19:42.435 "name": "TLSTEST", 00:19:42.435 "trtype": "tcp", 00:19:42.435 "traddr": "10.0.0.2", 00:19:42.435 "adrfam": "ipv4", 00:19:42.435 "trsvcid": "4420", 00:19:42.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.435 "prchk_reftag": false, 00:19:42.435 "prchk_guard": false, 00:19:42.435 "hdgst": false, 00:19:42.435 "ddgst": false, 00:19:42.435 "psk": "key0", 00:19:42.435 "allow_unrecognized_csi": false, 00:19:42.435 "method": "bdev_nvme_attach_controller", 00:19:42.435 "req_id": 1 00:19:42.435 } 00:19:42.435 Got JSON-RPC error response 00:19:42.435 response: 00:19:42.435 { 00:19:42.435 "code": -126, 00:19:42.435 "message": "Required key not available" 00:19:42.435 } 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 711847 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 711847 ']' 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 711847 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 711847 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 711847' 00:19:42.435 killing process with pid 711847 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 711847 00:19:42.435 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.435 00:19:42.435 Latency(us) 00:19:42.435 [2024-11-20T08:05:07.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.435 [2024-11-20T08:05:07.964Z] =================================================================================================================== 00:19:42.435 [2024-11-20T08:05:07.964Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 711847 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 709182 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 709182 ']' 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 709182 00:19:42.435 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:42.696 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.696 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 709182 00:19:42.696 09:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 709182' 00:19:42.696 killing process with pid 709182 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 709182 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 709182 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=712119 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 712119 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 712119 ']' 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:42.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.696 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.696 [2024-11-20 09:05:08.186000] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:19:42.696 [2024-11-20 09:05:08.186064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.956 [2024-11-20 09:05:08.277118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.957 [2024-11-20 09:05:08.305865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.957 [2024-11-20 09:05:08.305894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.957 [2024-11-20 09:05:08.305899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.957 [2024-11-20 09:05:08.305904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.957 [2024-11-20 09:05:08.305908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:42.957 [2024-11-20 09:05:08.306358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.527 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.528 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:43.528 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:43.528 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.528 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.528 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.528 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.rBZjjuoquE 00:19:43.528 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:43.528 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rBZjjuoquE 00:19:43.528 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:43.528 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.528 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:43.528 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.528 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.rBZjjuoquE 00:19:43.528 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rBZjjuoquE 00:19:43.528 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:43.788 [2024-11-20 09:05:09.177693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.788 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:44.049 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:44.049 [2024-11-20 09:05:09.538582] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.049 [2024-11-20 09:05:09.538771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.049 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:44.310 malloc0 00:19:44.310 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:44.570 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rBZjjuoquE 00:19:44.570 [2024-11-20 09:05:10.085654] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rBZjjuoquE': 0100666 00:19:44.570 [2024-11-20 09:05:10.085679] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:44.570 request: 00:19:44.570 { 00:19:44.570 "name": "key0", 00:19:44.571 "path": "/tmp/tmp.rBZjjuoquE", 00:19:44.571 "method": "keyring_file_add_key", 00:19:44.571 "req_id": 1 
00:19:44.571 } 00:19:44.571 Got JSON-RPC error response 00:19:44.571 response: 00:19:44.571 { 00:19:44.571 "code": -1, 00:19:44.571 "message": "Operation not permitted" 00:19:44.571 } 00:19:44.832 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:44.832 [2024-11-20 09:05:10.270135] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:44.832 [2024-11-20 09:05:10.270165] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:44.832 request: 00:19:44.832 { 00:19:44.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.832 "host": "nqn.2016-06.io.spdk:host1", 00:19:44.832 "psk": "key0", 00:19:44.832 "method": "nvmf_subsystem_add_host", 00:19:44.832 "req_id": 1 00:19:44.832 } 00:19:44.832 Got JSON-RPC error response 00:19:44.832 response: 00:19:44.832 { 00:19:44.832 "code": -32603, 00:19:44.832 "message": "Internal error" 00:19:44.832 } 00:19:44.832 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:44.832 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.832 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.832 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.832 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 712119 00:19:44.832 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 712119 ']' 00:19:44.832 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 712119 00:19:44.832 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:44.832 09:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.832 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 712119 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 712119' 00:19:45.093 killing process with pid 712119 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 712119 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 712119 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.rBZjjuoquE 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=712706 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 712706 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 712706 ']' 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.093 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.093 [2024-11-20 09:05:10.540073] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:19:45.093 [2024-11-20 09:05:10.540132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.356 [2024-11-20 09:05:10.630847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.356 [2024-11-20 09:05:10.663054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.356 [2024-11-20 09:05:10.663085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.356 [2024-11-20 09:05:10.663091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.356 [2024-11-20 09:05:10.663096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.356 [2024-11-20 09:05:10.663101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:45.356 [2024-11-20 09:05:10.663605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.927 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.927 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.927 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:45.927 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:45.927 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.927 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.927 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.rBZjjuoquE 00:19:45.927 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rBZjjuoquE 00:19:45.927 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:46.188 [2024-11-20 09:05:11.529530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.188 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:46.448 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:46.448 [2024-11-20 09:05:11.898432] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.448 [2024-11-20 09:05:11.898616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:46.449 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:46.708 malloc0 00:19:46.708 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:46.970 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rBZjjuoquE 00:19:46.970 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:47.231 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:47.231 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=713069 00:19:47.231 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.231 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 713069 /var/tmp/bdevperf.sock 00:19:47.231 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 713069 ']' 00:19:47.231 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.231 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.231 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:19:47.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.231 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.231 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.231 [2024-11-20 09:05:12.686617] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:19:47.231 [2024-11-20 09:05:12.686668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid713069 ] 00:19:47.492 [2024-11-20 09:05:12.773748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.492 [2024-11-20 09:05:12.808547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.063 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.063 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.063 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rBZjjuoquE 00:19:48.323 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.323 [2024-11-20 09:05:13.823882] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.583 TLSTESTn1 00:19:48.583 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:48.844 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:48.844 "subsystems": [ 00:19:48.844 { 00:19:48.844 "subsystem": "keyring", 00:19:48.844 "config": [ 00:19:48.844 { 00:19:48.844 "method": "keyring_file_add_key", 00:19:48.844 "params": { 00:19:48.844 "name": "key0", 00:19:48.844 "path": "/tmp/tmp.rBZjjuoquE" 00:19:48.844 } 00:19:48.844 } 00:19:48.844 ] 00:19:48.844 }, 00:19:48.844 { 00:19:48.844 "subsystem": "iobuf", 00:19:48.844 "config": [ 00:19:48.844 { 00:19:48.844 "method": "iobuf_set_options", 00:19:48.844 "params": { 00:19:48.844 "small_pool_count": 8192, 00:19:48.844 "large_pool_count": 1024, 00:19:48.844 "small_bufsize": 8192, 00:19:48.844 "large_bufsize": 135168, 00:19:48.844 "enable_numa": false 00:19:48.844 } 00:19:48.844 } 00:19:48.844 ] 00:19:48.844 }, 00:19:48.844 { 00:19:48.844 "subsystem": "sock", 00:19:48.844 "config": [ 00:19:48.844 { 00:19:48.844 "method": "sock_set_default_impl", 00:19:48.844 "params": { 00:19:48.844 "impl_name": "posix" 00:19:48.844 } 00:19:48.844 }, 00:19:48.844 { 00:19:48.844 "method": "sock_impl_set_options", 00:19:48.844 "params": { 00:19:48.844 "impl_name": "ssl", 00:19:48.844 "recv_buf_size": 4096, 00:19:48.844 "send_buf_size": 4096, 00:19:48.844 "enable_recv_pipe": true, 00:19:48.844 "enable_quickack": false, 00:19:48.844 "enable_placement_id": 0, 00:19:48.844 "enable_zerocopy_send_server": true, 00:19:48.844 "enable_zerocopy_send_client": false, 00:19:48.844 "zerocopy_threshold": 0, 00:19:48.844 "tls_version": 0, 00:19:48.844 "enable_ktls": false 00:19:48.844 } 00:19:48.844 }, 00:19:48.844 { 00:19:48.844 "method": "sock_impl_set_options", 00:19:48.844 "params": { 00:19:48.844 "impl_name": "posix", 00:19:48.844 "recv_buf_size": 2097152, 00:19:48.844 "send_buf_size": 2097152, 00:19:48.844 "enable_recv_pipe": true, 00:19:48.844 "enable_quickack": false, 00:19:48.844 "enable_placement_id": 0, 
00:19:48.844 "enable_zerocopy_send_server": true, 00:19:48.844 "enable_zerocopy_send_client": false, 00:19:48.844 "zerocopy_threshold": 0, 00:19:48.844 "tls_version": 0, 00:19:48.844 "enable_ktls": false 00:19:48.844 } 00:19:48.844 } 00:19:48.844 ] 00:19:48.844 }, 00:19:48.844 { 00:19:48.844 "subsystem": "vmd", 00:19:48.844 "config": [] 00:19:48.844 }, 00:19:48.844 { 00:19:48.844 "subsystem": "accel", 00:19:48.844 "config": [ 00:19:48.844 { 00:19:48.844 "method": "accel_set_options", 00:19:48.844 "params": { 00:19:48.844 "small_cache_size": 128, 00:19:48.844 "large_cache_size": 16, 00:19:48.844 "task_count": 2048, 00:19:48.844 "sequence_count": 2048, 00:19:48.844 "buf_count": 2048 00:19:48.844 } 00:19:48.844 } 00:19:48.844 ] 00:19:48.844 }, 00:19:48.844 { 00:19:48.844 "subsystem": "bdev", 00:19:48.844 "config": [ 00:19:48.844 { 00:19:48.844 "method": "bdev_set_options", 00:19:48.844 "params": { 00:19:48.844 "bdev_io_pool_size": 65535, 00:19:48.844 "bdev_io_cache_size": 256, 00:19:48.844 "bdev_auto_examine": true, 00:19:48.844 "iobuf_small_cache_size": 128, 00:19:48.844 "iobuf_large_cache_size": 16 00:19:48.844 } 00:19:48.844 }, 00:19:48.844 { 00:19:48.844 "method": "bdev_raid_set_options", 00:19:48.844 "params": { 00:19:48.844 "process_window_size_kb": 1024, 00:19:48.844 "process_max_bandwidth_mb_sec": 0 00:19:48.844 } 00:19:48.844 }, 00:19:48.844 { 00:19:48.844 "method": "bdev_iscsi_set_options", 00:19:48.844 "params": { 00:19:48.844 "timeout_sec": 30 00:19:48.844 } 00:19:48.844 }, 00:19:48.844 { 00:19:48.844 "method": "bdev_nvme_set_options", 00:19:48.844 "params": { 00:19:48.844 "action_on_timeout": "none", 00:19:48.844 "timeout_us": 0, 00:19:48.844 "timeout_admin_us": 0, 00:19:48.844 "keep_alive_timeout_ms": 10000, 00:19:48.844 "arbitration_burst": 0, 00:19:48.844 "low_priority_weight": 0, 00:19:48.844 "medium_priority_weight": 0, 00:19:48.844 "high_priority_weight": 0, 00:19:48.844 "nvme_adminq_poll_period_us": 10000, 00:19:48.844 "nvme_ioq_poll_period_us": 0, 
00:19:48.844 "io_queue_requests": 0, 00:19:48.844 "delay_cmd_submit": true, 00:19:48.844 "transport_retry_count": 4, 00:19:48.844 "bdev_retry_count": 3, 00:19:48.844 "transport_ack_timeout": 0, 00:19:48.844 "ctrlr_loss_timeout_sec": 0, 00:19:48.844 "reconnect_delay_sec": 0, 00:19:48.844 "fast_io_fail_timeout_sec": 0, 00:19:48.844 "disable_auto_failback": false, 00:19:48.844 "generate_uuids": false, 00:19:48.844 "transport_tos": 0, 00:19:48.844 "nvme_error_stat": false, 00:19:48.845 "rdma_srq_size": 0, 00:19:48.845 "io_path_stat": false, 00:19:48.845 "allow_accel_sequence": false, 00:19:48.845 "rdma_max_cq_size": 0, 00:19:48.845 "rdma_cm_event_timeout_ms": 0, 00:19:48.845 "dhchap_digests": [ 00:19:48.845 "sha256", 00:19:48.845 "sha384", 00:19:48.845 "sha512" 00:19:48.845 ], 00:19:48.845 "dhchap_dhgroups": [ 00:19:48.845 "null", 00:19:48.845 "ffdhe2048", 00:19:48.845 "ffdhe3072", 00:19:48.845 "ffdhe4096", 00:19:48.845 "ffdhe6144", 00:19:48.845 "ffdhe8192" 00:19:48.845 ] 00:19:48.845 } 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "method": "bdev_nvme_set_hotplug", 00:19:48.845 "params": { 00:19:48.845 "period_us": 100000, 00:19:48.845 "enable": false 00:19:48.845 } 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "method": "bdev_malloc_create", 00:19:48.845 "params": { 00:19:48.845 "name": "malloc0", 00:19:48.845 "num_blocks": 8192, 00:19:48.845 "block_size": 4096, 00:19:48.845 "physical_block_size": 4096, 00:19:48.845 "uuid": "009861d3-7edb-4a5d-a31c-d313d9f8d5ab", 00:19:48.845 "optimal_io_boundary": 0, 00:19:48.845 "md_size": 0, 00:19:48.845 "dif_type": 0, 00:19:48.845 "dif_is_head_of_md": false, 00:19:48.845 "dif_pi_format": 0 00:19:48.845 } 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "method": "bdev_wait_for_examine" 00:19:48.845 } 00:19:48.845 ] 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "subsystem": "nbd", 00:19:48.845 "config": [] 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "subsystem": "scheduler", 00:19:48.845 "config": [ 00:19:48.845 { 00:19:48.845 "method": 
"framework_set_scheduler", 00:19:48.845 "params": { 00:19:48.845 "name": "static" 00:19:48.845 } 00:19:48.845 } 00:19:48.845 ] 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "subsystem": "nvmf", 00:19:48.845 "config": [ 00:19:48.845 { 00:19:48.845 "method": "nvmf_set_config", 00:19:48.845 "params": { 00:19:48.845 "discovery_filter": "match_any", 00:19:48.845 "admin_cmd_passthru": { 00:19:48.845 "identify_ctrlr": false 00:19:48.845 }, 00:19:48.845 "dhchap_digests": [ 00:19:48.845 "sha256", 00:19:48.845 "sha384", 00:19:48.845 "sha512" 00:19:48.845 ], 00:19:48.845 "dhchap_dhgroups": [ 00:19:48.845 "null", 00:19:48.845 "ffdhe2048", 00:19:48.845 "ffdhe3072", 00:19:48.845 "ffdhe4096", 00:19:48.845 "ffdhe6144", 00:19:48.845 "ffdhe8192" 00:19:48.845 ] 00:19:48.845 } 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "method": "nvmf_set_max_subsystems", 00:19:48.845 "params": { 00:19:48.845 "max_subsystems": 1024 00:19:48.845 } 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "method": "nvmf_set_crdt", 00:19:48.845 "params": { 00:19:48.845 "crdt1": 0, 00:19:48.845 "crdt2": 0, 00:19:48.845 "crdt3": 0 00:19:48.845 } 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "method": "nvmf_create_transport", 00:19:48.845 "params": { 00:19:48.845 "trtype": "TCP", 00:19:48.845 "max_queue_depth": 128, 00:19:48.845 "max_io_qpairs_per_ctrlr": 127, 00:19:48.845 "in_capsule_data_size": 4096, 00:19:48.845 "max_io_size": 131072, 00:19:48.845 "io_unit_size": 131072, 00:19:48.845 "max_aq_depth": 128, 00:19:48.845 "num_shared_buffers": 511, 00:19:48.845 "buf_cache_size": 4294967295, 00:19:48.845 "dif_insert_or_strip": false, 00:19:48.845 "zcopy": false, 00:19:48.845 "c2h_success": false, 00:19:48.845 "sock_priority": 0, 00:19:48.845 "abort_timeout_sec": 1, 00:19:48.845 "ack_timeout": 0, 00:19:48.845 "data_wr_pool_size": 0 00:19:48.845 } 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "method": "nvmf_create_subsystem", 00:19:48.845 "params": { 00:19:48.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.845 
"allow_any_host": false, 00:19:48.845 "serial_number": "SPDK00000000000001", 00:19:48.845 "model_number": "SPDK bdev Controller", 00:19:48.845 "max_namespaces": 10, 00:19:48.845 "min_cntlid": 1, 00:19:48.845 "max_cntlid": 65519, 00:19:48.845 "ana_reporting": false 00:19:48.845 } 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "method": "nvmf_subsystem_add_host", 00:19:48.845 "params": { 00:19:48.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.845 "host": "nqn.2016-06.io.spdk:host1", 00:19:48.845 "psk": "key0" 00:19:48.845 } 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "method": "nvmf_subsystem_add_ns", 00:19:48.845 "params": { 00:19:48.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.845 "namespace": { 00:19:48.845 "nsid": 1, 00:19:48.845 "bdev_name": "malloc0", 00:19:48.845 "nguid": "009861D37EDB4A5DA31CD313D9F8D5AB", 00:19:48.845 "uuid": "009861d3-7edb-4a5d-a31c-d313d9f8d5ab", 00:19:48.845 "no_auto_visible": false 00:19:48.845 } 00:19:48.845 } 00:19:48.845 }, 00:19:48.845 { 00:19:48.845 "method": "nvmf_subsystem_add_listener", 00:19:48.845 "params": { 00:19:48.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.845 "listen_address": { 00:19:48.845 "trtype": "TCP", 00:19:48.845 "adrfam": "IPv4", 00:19:48.845 "traddr": "10.0.0.2", 00:19:48.845 "trsvcid": "4420" 00:19:48.845 }, 00:19:48.845 "secure_channel": true 00:19:48.845 } 00:19:48.845 } 00:19:48.845 ] 00:19:48.845 } 00:19:48.845 ] 00:19:48.845 }' 00:19:48.845 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:49.106 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:49.106 "subsystems": [ 00:19:49.106 { 00:19:49.106 "subsystem": "keyring", 00:19:49.106 "config": [ 00:19:49.106 { 00:19:49.106 "method": "keyring_file_add_key", 00:19:49.106 "params": { 00:19:49.106 "name": "key0", 00:19:49.106 "path": "/tmp/tmp.rBZjjuoquE" 00:19:49.106 } 
00:19:49.106 } 00:19:49.106 ] 00:19:49.106 }, 00:19:49.106 { 00:19:49.106 "subsystem": "iobuf", 00:19:49.106 "config": [ 00:19:49.106 { 00:19:49.106 "method": "iobuf_set_options", 00:19:49.106 "params": { 00:19:49.106 "small_pool_count": 8192, 00:19:49.106 "large_pool_count": 1024, 00:19:49.106 "small_bufsize": 8192, 00:19:49.106 "large_bufsize": 135168, 00:19:49.106 "enable_numa": false 00:19:49.106 } 00:19:49.106 } 00:19:49.106 ] 00:19:49.106 }, 00:19:49.106 { 00:19:49.106 "subsystem": "sock", 00:19:49.106 "config": [ 00:19:49.106 { 00:19:49.106 "method": "sock_set_default_impl", 00:19:49.106 "params": { 00:19:49.106 "impl_name": "posix" 00:19:49.106 } 00:19:49.106 }, 00:19:49.106 { 00:19:49.106 "method": "sock_impl_set_options", 00:19:49.106 "params": { 00:19:49.106 "impl_name": "ssl", 00:19:49.106 "recv_buf_size": 4096, 00:19:49.106 "send_buf_size": 4096, 00:19:49.106 "enable_recv_pipe": true, 00:19:49.106 "enable_quickack": false, 00:19:49.106 "enable_placement_id": 0, 00:19:49.106 "enable_zerocopy_send_server": true, 00:19:49.106 "enable_zerocopy_send_client": false, 00:19:49.106 "zerocopy_threshold": 0, 00:19:49.106 "tls_version": 0, 00:19:49.106 "enable_ktls": false 00:19:49.106 } 00:19:49.106 }, 00:19:49.106 { 00:19:49.106 "method": "sock_impl_set_options", 00:19:49.106 "params": { 00:19:49.106 "impl_name": "posix", 00:19:49.106 "recv_buf_size": 2097152, 00:19:49.106 "send_buf_size": 2097152, 00:19:49.106 "enable_recv_pipe": true, 00:19:49.106 "enable_quickack": false, 00:19:49.106 "enable_placement_id": 0, 00:19:49.106 "enable_zerocopy_send_server": true, 00:19:49.106 "enable_zerocopy_send_client": false, 00:19:49.106 "zerocopy_threshold": 0, 00:19:49.106 "tls_version": 0, 00:19:49.106 "enable_ktls": false 00:19:49.106 } 00:19:49.106 } 00:19:49.106 ] 00:19:49.106 }, 00:19:49.106 { 00:19:49.106 "subsystem": "vmd", 00:19:49.106 "config": [] 00:19:49.106 }, 00:19:49.106 { 00:19:49.106 "subsystem": "accel", 00:19:49.106 "config": [ 00:19:49.106 { 00:19:49.106 
"method": "accel_set_options", 00:19:49.106 "params": { 00:19:49.106 "small_cache_size": 128, 00:19:49.106 "large_cache_size": 16, 00:19:49.106 "task_count": 2048, 00:19:49.106 "sequence_count": 2048, 00:19:49.106 "buf_count": 2048 00:19:49.106 } 00:19:49.106 } 00:19:49.106 ] 00:19:49.106 }, 00:19:49.106 { 00:19:49.106 "subsystem": "bdev", 00:19:49.106 "config": [ 00:19:49.106 { 00:19:49.106 "method": "bdev_set_options", 00:19:49.106 "params": { 00:19:49.106 "bdev_io_pool_size": 65535, 00:19:49.106 "bdev_io_cache_size": 256, 00:19:49.106 "bdev_auto_examine": true, 00:19:49.106 "iobuf_small_cache_size": 128, 00:19:49.106 "iobuf_large_cache_size": 16 00:19:49.106 } 00:19:49.106 }, 00:19:49.106 { 00:19:49.106 "method": "bdev_raid_set_options", 00:19:49.106 "params": { 00:19:49.106 "process_window_size_kb": 1024, 00:19:49.106 "process_max_bandwidth_mb_sec": 0 00:19:49.106 } 00:19:49.106 }, 00:19:49.106 { 00:19:49.106 "method": "bdev_iscsi_set_options", 00:19:49.106 "params": { 00:19:49.106 "timeout_sec": 30 00:19:49.106 } 00:19:49.106 }, 00:19:49.106 { 00:19:49.106 "method": "bdev_nvme_set_options", 00:19:49.106 "params": { 00:19:49.106 "action_on_timeout": "none", 00:19:49.107 "timeout_us": 0, 00:19:49.107 "timeout_admin_us": 0, 00:19:49.107 "keep_alive_timeout_ms": 10000, 00:19:49.107 "arbitration_burst": 0, 00:19:49.107 "low_priority_weight": 0, 00:19:49.107 "medium_priority_weight": 0, 00:19:49.107 "high_priority_weight": 0, 00:19:49.107 "nvme_adminq_poll_period_us": 10000, 00:19:49.107 "nvme_ioq_poll_period_us": 0, 00:19:49.107 "io_queue_requests": 512, 00:19:49.107 "delay_cmd_submit": true, 00:19:49.107 "transport_retry_count": 4, 00:19:49.107 "bdev_retry_count": 3, 00:19:49.107 "transport_ack_timeout": 0, 00:19:49.107 "ctrlr_loss_timeout_sec": 0, 00:19:49.107 "reconnect_delay_sec": 0, 00:19:49.107 "fast_io_fail_timeout_sec": 0, 00:19:49.107 "disable_auto_failback": false, 00:19:49.107 "generate_uuids": false, 00:19:49.107 "transport_tos": 0, 00:19:49.107 
"nvme_error_stat": false, 00:19:49.107 "rdma_srq_size": 0, 00:19:49.107 "io_path_stat": false, 00:19:49.107 "allow_accel_sequence": false, 00:19:49.107 "rdma_max_cq_size": 0, 00:19:49.107 "rdma_cm_event_timeout_ms": 0, 00:19:49.107 "dhchap_digests": [ 00:19:49.107 "sha256", 00:19:49.107 "sha384", 00:19:49.107 "sha512" 00:19:49.107 ], 00:19:49.107 "dhchap_dhgroups": [ 00:19:49.107 "null", 00:19:49.107 "ffdhe2048", 00:19:49.107 "ffdhe3072", 00:19:49.107 "ffdhe4096", 00:19:49.107 "ffdhe6144", 00:19:49.107 "ffdhe8192" 00:19:49.107 ] 00:19:49.107 } 00:19:49.107 }, 00:19:49.107 { 00:19:49.107 "method": "bdev_nvme_attach_controller", 00:19:49.107 "params": { 00:19:49.107 "name": "TLSTEST", 00:19:49.107 "trtype": "TCP", 00:19:49.107 "adrfam": "IPv4", 00:19:49.107 "traddr": "10.0.0.2", 00:19:49.107 "trsvcid": "4420", 00:19:49.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.107 "prchk_reftag": false, 00:19:49.107 "prchk_guard": false, 00:19:49.107 "ctrlr_loss_timeout_sec": 0, 00:19:49.107 "reconnect_delay_sec": 0, 00:19:49.107 "fast_io_fail_timeout_sec": 0, 00:19:49.107 "psk": "key0", 00:19:49.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.107 "hdgst": false, 00:19:49.107 "ddgst": false, 00:19:49.107 "multipath": "multipath" 00:19:49.107 } 00:19:49.107 }, 00:19:49.107 { 00:19:49.107 "method": "bdev_nvme_set_hotplug", 00:19:49.107 "params": { 00:19:49.107 "period_us": 100000, 00:19:49.107 "enable": false 00:19:49.107 } 00:19:49.107 }, 00:19:49.107 { 00:19:49.107 "method": "bdev_wait_for_examine" 00:19:49.107 } 00:19:49.107 ] 00:19:49.107 }, 00:19:49.107 { 00:19:49.107 "subsystem": "nbd", 00:19:49.107 "config": [] 00:19:49.107 } 00:19:49.107 ] 00:19:49.107 }' 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 713069 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 713069 ']' 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 713069 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 713069 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 713069' 00:19:49.107 killing process with pid 713069 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 713069 00:19:49.107 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.107 00:19:49.107 Latency(us) 00:19:49.107 [2024-11-20T08:05:14.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.107 [2024-11-20T08:05:14.636Z] =================================================================================================================== 00:19:49.107 [2024-11-20T08:05:14.636Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 713069 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 712706 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 712706 ']' 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 712706 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.107 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 712706 00:19:49.369 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:49.369 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:49.369 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 712706' 00:19:49.369 killing process with pid 712706 00:19:49.369 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 712706 00:19:49.369 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 712706 00:19:49.369 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:49.369 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:49.369 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.369 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.369 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:49.369 "subsystems": [ 00:19:49.369 { 00:19:49.369 "subsystem": "keyring", 00:19:49.369 "config": [ 00:19:49.369 { 00:19:49.369 "method": "keyring_file_add_key", 00:19:49.369 "params": { 00:19:49.369 "name": "key0", 00:19:49.369 "path": "/tmp/tmp.rBZjjuoquE" 00:19:49.369 } 00:19:49.369 } 00:19:49.369 ] 00:19:49.369 }, 00:19:49.369 { 00:19:49.369 "subsystem": "iobuf", 00:19:49.369 "config": [ 00:19:49.369 { 00:19:49.369 "method": "iobuf_set_options", 00:19:49.369 "params": { 00:19:49.369 "small_pool_count": 8192, 00:19:49.369 "large_pool_count": 1024, 00:19:49.369 "small_bufsize": 8192, 00:19:49.369 "large_bufsize": 135168, 
00:19:49.369 "enable_numa": false 00:19:49.369 } 00:19:49.369 } 00:19:49.369 ] 00:19:49.369 }, 00:19:49.369 { 00:19:49.369 "subsystem": "sock", 00:19:49.369 "config": [ 00:19:49.369 { 00:19:49.369 "method": "sock_set_default_impl", 00:19:49.369 "params": { 00:19:49.369 "impl_name": "posix" 00:19:49.369 } 00:19:49.369 }, 00:19:49.369 { 00:19:49.369 "method": "sock_impl_set_options", 00:19:49.369 "params": { 00:19:49.369 "impl_name": "ssl", 00:19:49.369 "recv_buf_size": 4096, 00:19:49.369 "send_buf_size": 4096, 00:19:49.369 "enable_recv_pipe": true, 00:19:49.369 "enable_quickack": false, 00:19:49.369 "enable_placement_id": 0, 00:19:49.369 "enable_zerocopy_send_server": true, 00:19:49.369 "enable_zerocopy_send_client": false, 00:19:49.369 "zerocopy_threshold": 0, 00:19:49.369 "tls_version": 0, 00:19:49.369 "enable_ktls": false 00:19:49.369 } 00:19:49.369 }, 00:19:49.369 { 00:19:49.369 "method": "sock_impl_set_options", 00:19:49.369 "params": { 00:19:49.369 "impl_name": "posix", 00:19:49.369 "recv_buf_size": 2097152, 00:19:49.369 "send_buf_size": 2097152, 00:19:49.369 "enable_recv_pipe": true, 00:19:49.369 "enable_quickack": false, 00:19:49.369 "enable_placement_id": 0, 00:19:49.369 "enable_zerocopy_send_server": true, 00:19:49.369 "enable_zerocopy_send_client": false, 00:19:49.369 "zerocopy_threshold": 0, 00:19:49.369 "tls_version": 0, 00:19:49.369 "enable_ktls": false 00:19:49.369 } 00:19:49.369 } 00:19:49.369 ] 00:19:49.369 }, 00:19:49.369 { 00:19:49.369 "subsystem": "vmd", 00:19:49.369 "config": [] 00:19:49.369 }, 00:19:49.369 { 00:19:49.369 "subsystem": "accel", 00:19:49.369 "config": [ 00:19:49.369 { 00:19:49.369 "method": "accel_set_options", 00:19:49.369 "params": { 00:19:49.369 "small_cache_size": 128, 00:19:49.369 "large_cache_size": 16, 00:19:49.369 "task_count": 2048, 00:19:49.369 "sequence_count": 2048, 00:19:49.369 "buf_count": 2048 00:19:49.369 } 00:19:49.369 } 00:19:49.369 ] 00:19:49.369 }, 00:19:49.369 { 00:19:49.369 "subsystem": "bdev", 00:19:49.369 
"config": [ 00:19:49.369 { 00:19:49.369 "method": "bdev_set_options", 00:19:49.369 "params": { 00:19:49.369 "bdev_io_pool_size": 65535, 00:19:49.369 "bdev_io_cache_size": 256, 00:19:49.369 "bdev_auto_examine": true, 00:19:49.369 "iobuf_small_cache_size": 128, 00:19:49.369 "iobuf_large_cache_size": 16 00:19:49.369 } 00:19:49.369 }, 00:19:49.369 { 00:19:49.369 "method": "bdev_raid_set_options", 00:19:49.369 "params": { 00:19:49.369 "process_window_size_kb": 1024, 00:19:49.369 "process_max_bandwidth_mb_sec": 0 00:19:49.369 } 00:19:49.369 }, 00:19:49.369 { 00:19:49.369 "method": "bdev_iscsi_set_options", 00:19:49.369 "params": { 00:19:49.369 "timeout_sec": 30 00:19:49.369 } 00:19:49.369 }, 00:19:49.369 { 00:19:49.369 "method": "bdev_nvme_set_options", 00:19:49.369 "params": { 00:19:49.370 "action_on_timeout": "none", 00:19:49.370 "timeout_us": 0, 00:19:49.370 "timeout_admin_us": 0, 00:19:49.370 "keep_alive_timeout_ms": 10000, 00:19:49.370 "arbitration_burst": 0, 00:19:49.370 "low_priority_weight": 0, 00:19:49.370 "medium_priority_weight": 0, 00:19:49.370 "high_priority_weight": 0, 00:19:49.370 "nvme_adminq_poll_period_us": 10000, 00:19:49.370 "nvme_ioq_poll_period_us": 0, 00:19:49.370 "io_queue_requests": 0, 00:19:49.370 "delay_cmd_submit": true, 00:19:49.370 "transport_retry_count": 4, 00:19:49.370 "bdev_retry_count": 3, 00:19:49.370 "transport_ack_timeout": 0, 00:19:49.370 "ctrlr_loss_timeout_sec": 0, 00:19:49.370 "reconnect_delay_sec": 0, 00:19:49.370 "fast_io_fail_timeout_sec": 0, 00:19:49.370 "disable_auto_failback": false, 00:19:49.370 "generate_uuids": false, 00:19:49.370 "transport_tos": 0, 00:19:49.370 "nvme_error_stat": false, 00:19:49.370 "rdma_srq_size": 0, 00:19:49.370 "io_path_stat": false, 00:19:49.370 "allow_accel_sequence": false, 00:19:49.370 "rdma_max_cq_size": 0, 00:19:49.370 "rdma_cm_event_timeout_ms": 0, 00:19:49.370 "dhchap_digests": [ 00:19:49.370 "sha256", 00:19:49.370 "sha384", 00:19:49.370 "sha512" 00:19:49.370 ], 00:19:49.370 
"dhchap_dhgroups": [ 00:19:49.370 "null", 00:19:49.370 "ffdhe2048", 00:19:49.370 "ffdhe3072", 00:19:49.370 "ffdhe4096", 00:19:49.370 "ffdhe6144", 00:19:49.370 "ffdhe8192" 00:19:49.370 ] 00:19:49.370 } 00:19:49.370 }, 00:19:49.370 { 00:19:49.370 "method": "bdev_nvme_set_hotplug", 00:19:49.370 "params": { 00:19:49.370 "period_us": 100000, 00:19:49.370 "enable": false 00:19:49.370 } 00:19:49.370 }, 00:19:49.370 { 00:19:49.370 "method": "bdev_malloc_create", 00:19:49.370 "params": { 00:19:49.370 "name": "malloc0", 00:19:49.370 "num_blocks": 8192, 00:19:49.370 "block_size": 4096, 00:19:49.370 "physical_block_size": 4096, 00:19:49.370 "uuid": "009861d3-7edb-4a5d-a31c-d313d9f8d5ab", 00:19:49.370 "optimal_io_boundary": 0, 00:19:49.370 "md_size": 0, 00:19:49.370 "dif_type": 0, 00:19:49.370 "dif_is_head_of_md": false, 00:19:49.370 "dif_pi_format": 0 00:19:49.370 } 00:19:49.370 }, 00:19:49.370 { 00:19:49.370 "method": "bdev_wait_for_examine" 00:19:49.370 } 00:19:49.370 ] 00:19:49.370 }, 00:19:49.370 { 00:19:49.370 "subsystem": "nbd", 00:19:49.370 "config": [] 00:19:49.370 }, 00:19:49.370 { 00:19:49.370 "subsystem": "scheduler", 00:19:49.370 "config": [ 00:19:49.370 { 00:19:49.370 "method": "framework_set_scheduler", 00:19:49.370 "params": { 00:19:49.370 "name": "static" 00:19:49.370 } 00:19:49.370 } 00:19:49.370 ] 00:19:49.370 }, 00:19:49.370 { 00:19:49.370 "subsystem": "nvmf", 00:19:49.370 "config": [ 00:19:49.370 { 00:19:49.370 "method": "nvmf_set_config", 00:19:49.370 "params": { 00:19:49.370 "discovery_filter": "match_any", 00:19:49.370 "admin_cmd_passthru": { 00:19:49.370 "identify_ctrlr": false 00:19:49.370 }, 00:19:49.370 "dhchap_digests": [ 00:19:49.370 "sha256", 00:19:49.370 "sha384", 00:19:49.370 "sha512" 00:19:49.370 ], 00:19:49.370 "dhchap_dhgroups": [ 00:19:49.370 "null", 00:19:49.370 "ffdhe2048", 00:19:49.370 "ffdhe3072", 00:19:49.370 "ffdhe4096", 00:19:49.370 "ffdhe6144", 00:19:49.370 "ffdhe8192" 00:19:49.370 ] 00:19:49.370 } 00:19:49.370 }, 00:19:49.370 { 
00:19:49.370 "method": "nvmf_set_max_subsystems", 00:19:49.370 "params": { 00:19:49.370 "max_subsystems": 1024 00:19:49.370 } 00:19:49.370 }, 00:19:49.370 { 00:19:49.370 "method": "nvmf_set_crdt", 00:19:49.370 "params": { 00:19:49.370 "crdt1": 0, 00:19:49.370 "crdt2": 0, 00:19:49.370 "crdt3": 0 00:19:49.370 } 00:19:49.370 }, 00:19:49.370 { 00:19:49.370 "method": "nvmf_create_transport", 00:19:49.370 "params": { 00:19:49.370 "trtype": "TCP", 00:19:49.370 "max_queue_depth": 128, 00:19:49.370 "max_io_qpairs_per_ctrlr": 127, 00:19:49.370 "in_capsule_data_size": 4096, 00:19:49.370 "max_io_size": 131072, 00:19:49.370 "io_unit_size": 131072, 00:19:49.370 "max_aq_depth": 128, 00:19:49.370 "num_shared_buffers": 511, 00:19:49.370 "buf_cache_size": 4294967295, 00:19:49.370 "dif_insert_or_strip": false, 00:19:49.370 "zcopy": false, 00:19:49.370 "c2h_success": false, 00:19:49.370 "sock_priority": 0, 00:19:49.370 "abort_timeout_sec": 1, 00:19:49.370 "ack_timeout": 0, 00:19:49.370 "data_wr_pool_size": 0 00:19:49.370 } 00:19:49.370 }, 00:19:49.370 { 00:19:49.370 "method": "nvmf_create_subsystem", 00:19:49.370 "params": { 00:19:49.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.370 "allow_any_host": false, 00:19:49.370 "serial_number": "SPDK00000000000001", 00:19:49.370 "model_number": "SPDK bdev Controller", 00:19:49.370 "max_namespaces": 10, 00:19:49.370 "min_cntlid": 1, 00:19:49.370 "max_cntlid": 65519, 00:19:49.370 "ana_reporting": false 00:19:49.370 } 00:19:49.370 }, 00:19:49.370 { 00:19:49.370 "method": "nvmf_subsystem_add_host", 00:19:49.370 "params": { 00:19:49.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.371 "host": "nqn.2016-06.io.spdk:host1", 00:19:49.371 "psk": "key0" 00:19:49.371 } 00:19:49.371 }, 00:19:49.371 { 00:19:49.371 "method": "nvmf_subsystem_add_ns", 00:19:49.371 "params": { 00:19:49.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.371 "namespace": { 00:19:49.371 "nsid": 1, 00:19:49.371 "bdev_name": "malloc0", 00:19:49.371 "nguid": 
"009861D37EDB4A5DA31CD313D9F8D5AB", 00:19:49.371 "uuid": "009861d3-7edb-4a5d-a31c-d313d9f8d5ab", 00:19:49.371 "no_auto_visible": false 00:19:49.371 } 00:19:49.371 } 00:19:49.371 }, 00:19:49.371 { 00:19:49.371 "method": "nvmf_subsystem_add_listener", 00:19:49.371 "params": { 00:19:49.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.371 "listen_address": { 00:19:49.371 "trtype": "TCP", 00:19:49.371 "adrfam": "IPv4", 00:19:49.371 "traddr": "10.0.0.2", 00:19:49.371 "trsvcid": "4420" 00:19:49.371 }, 00:19:49.371 "secure_channel": true 00:19:49.371 } 00:19:49.371 } 00:19:49.371 ] 00:19:49.371 } 00:19:49.371 ] 00:19:49.371 }' 00:19:49.371 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=713435 00:19:49.371 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 713435 00:19:49.371 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:49.371 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 713435 ']' 00:19:49.371 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.371 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.371 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:49.371 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.371 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.371 [2024-11-20 09:05:14.851961] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:19:49.371 [2024-11-20 09:05:14.852023] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.632 [2024-11-20 09:05:14.943332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.632 [2024-11-20 09:05:14.972858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.632 [2024-11-20 09:05:14.972886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.632 [2024-11-20 09:05:14.972892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.632 [2024-11-20 09:05:14.972897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.632 [2024-11-20 09:05:14.972901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.632 [2024-11-20 09:05:14.973390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.892 [2024-11-20 09:05:15.165767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.893 [2024-11-20 09:05:15.197790] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.893 [2024-11-20 09:05:15.197989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=713774 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 713774 /var/tmp/bdevperf.sock 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 713774 ']' 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c 
/dev/fd/63 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.153 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.413 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.413 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:50.413 "subsystems": [ 00:19:50.413 { 00:19:50.413 "subsystem": "keyring", 00:19:50.413 "config": [ 00:19:50.413 { 00:19:50.413 "method": "keyring_file_add_key", 00:19:50.413 "params": { 00:19:50.413 "name": "key0", 00:19:50.413 "path": "/tmp/tmp.rBZjjuoquE" 00:19:50.413 } 00:19:50.413 } 00:19:50.413 ] 00:19:50.413 }, 00:19:50.413 { 00:19:50.413 "subsystem": "iobuf", 00:19:50.413 "config": [ 00:19:50.413 { 00:19:50.413 "method": "iobuf_set_options", 00:19:50.413 "params": { 00:19:50.413 "small_pool_count": 8192, 00:19:50.413 "large_pool_count": 1024, 00:19:50.413 "small_bufsize": 8192, 00:19:50.413 "large_bufsize": 135168, 00:19:50.413 "enable_numa": false 00:19:50.413 } 00:19:50.413 } 00:19:50.413 ] 00:19:50.413 }, 00:19:50.413 { 00:19:50.413 "subsystem": "sock", 00:19:50.413 "config": [ 00:19:50.413 { 00:19:50.413 "method": "sock_set_default_impl", 00:19:50.413 "params": { 00:19:50.413 "impl_name": "posix" 00:19:50.413 } 00:19:50.413 }, 00:19:50.413 { 00:19:50.413 "method": "sock_impl_set_options", 00:19:50.414 "params": { 00:19:50.414 "impl_name": "ssl", 00:19:50.414 "recv_buf_size": 4096, 00:19:50.414 "send_buf_size": 4096, 00:19:50.414 "enable_recv_pipe": true, 00:19:50.414 "enable_quickack": false, 00:19:50.414 "enable_placement_id": 0, 00:19:50.414 "enable_zerocopy_send_server": true, 00:19:50.414 "enable_zerocopy_send_client": false, 00:19:50.414 
"zerocopy_threshold": 0, 00:19:50.414 "tls_version": 0, 00:19:50.414 "enable_ktls": false 00:19:50.414 } 00:19:50.414 }, 00:19:50.414 { 00:19:50.414 "method": "sock_impl_set_options", 00:19:50.414 "params": { 00:19:50.414 "impl_name": "posix", 00:19:50.414 "recv_buf_size": 2097152, 00:19:50.414 "send_buf_size": 2097152, 00:19:50.414 "enable_recv_pipe": true, 00:19:50.414 "enable_quickack": false, 00:19:50.414 "enable_placement_id": 0, 00:19:50.414 "enable_zerocopy_send_server": true, 00:19:50.414 "enable_zerocopy_send_client": false, 00:19:50.414 "zerocopy_threshold": 0, 00:19:50.414 "tls_version": 0, 00:19:50.414 "enable_ktls": false 00:19:50.414 } 00:19:50.414 } 00:19:50.414 ] 00:19:50.414 }, 00:19:50.414 { 00:19:50.414 "subsystem": "vmd", 00:19:50.414 "config": [] 00:19:50.414 }, 00:19:50.414 { 00:19:50.414 "subsystem": "accel", 00:19:50.414 "config": [ 00:19:50.414 { 00:19:50.414 "method": "accel_set_options", 00:19:50.414 "params": { 00:19:50.414 "small_cache_size": 128, 00:19:50.414 "large_cache_size": 16, 00:19:50.414 "task_count": 2048, 00:19:50.414 "sequence_count": 2048, 00:19:50.414 "buf_count": 2048 00:19:50.414 } 00:19:50.414 } 00:19:50.414 ] 00:19:50.414 }, 00:19:50.414 { 00:19:50.414 "subsystem": "bdev", 00:19:50.414 "config": [ 00:19:50.414 { 00:19:50.414 "method": "bdev_set_options", 00:19:50.414 "params": { 00:19:50.414 "bdev_io_pool_size": 65535, 00:19:50.414 "bdev_io_cache_size": 256, 00:19:50.414 "bdev_auto_examine": true, 00:19:50.414 "iobuf_small_cache_size": 128, 00:19:50.414 "iobuf_large_cache_size": 16 00:19:50.414 } 00:19:50.414 }, 00:19:50.414 { 00:19:50.414 "method": "bdev_raid_set_options", 00:19:50.414 "params": { 00:19:50.414 "process_window_size_kb": 1024, 00:19:50.414 "process_max_bandwidth_mb_sec": 0 00:19:50.414 } 00:19:50.414 }, 00:19:50.414 { 00:19:50.414 "method": "bdev_iscsi_set_options", 00:19:50.414 "params": { 00:19:50.414 "timeout_sec": 30 00:19:50.414 } 00:19:50.414 }, 00:19:50.414 { 00:19:50.414 "method": 
"bdev_nvme_set_options", 00:19:50.414 "params": { 00:19:50.414 "action_on_timeout": "none", 00:19:50.414 "timeout_us": 0, 00:19:50.414 "timeout_admin_us": 0, 00:19:50.414 "keep_alive_timeout_ms": 10000, 00:19:50.414 "arbitration_burst": 0, 00:19:50.414 "low_priority_weight": 0, 00:19:50.414 "medium_priority_weight": 0, 00:19:50.414 "high_priority_weight": 0, 00:19:50.414 "nvme_adminq_poll_period_us": 10000, 00:19:50.414 "nvme_ioq_poll_period_us": 0, 00:19:50.414 "io_queue_requests": 512, 00:19:50.414 "delay_cmd_submit": true, 00:19:50.414 "transport_retry_count": 4, 00:19:50.414 "bdev_retry_count": 3, 00:19:50.414 "transport_ack_timeout": 0, 00:19:50.414 "ctrlr_loss_timeout_sec": 0, 00:19:50.414 "reconnect_delay_sec": 0, 00:19:50.414 "fast_io_fail_timeout_sec": 0, 00:19:50.414 "disable_auto_failback": false, 00:19:50.414 "generate_uuids": false, 00:19:50.414 "transport_tos": 0, 00:19:50.414 "nvme_error_stat": false, 00:19:50.414 "rdma_srq_size": 0, 00:19:50.414 "io_path_stat": false, 00:19:50.414 "allow_accel_sequence": false, 00:19:50.414 "rdma_max_cq_size": 0, 00:19:50.414 "rdma_cm_event_timeout_ms": 0, 00:19:50.414 "dhchap_digests": [ 00:19:50.414 "sha256", 00:19:50.414 "sha384", 00:19:50.414 "sha512" 00:19:50.414 ], 00:19:50.414 "dhchap_dhgroups": [ 00:19:50.414 "null", 00:19:50.414 "ffdhe2048", 00:19:50.414 "ffdhe3072", 00:19:50.414 "ffdhe4096", 00:19:50.414 "ffdhe6144", 00:19:50.414 "ffdhe8192" 00:19:50.414 ] 00:19:50.414 } 00:19:50.414 }, 00:19:50.414 { 00:19:50.414 "method": "bdev_nvme_attach_controller", 00:19:50.414 "params": { 00:19:50.414 "name": "TLSTEST", 00:19:50.414 "trtype": "TCP", 00:19:50.414 "adrfam": "IPv4", 00:19:50.414 "traddr": "10.0.0.2", 00:19:50.414 "trsvcid": "4420", 00:19:50.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.414 "prchk_reftag": false, 00:19:50.414 "prchk_guard": false, 00:19:50.414 "ctrlr_loss_timeout_sec": 0, 00:19:50.414 "reconnect_delay_sec": 0, 00:19:50.414 "fast_io_fail_timeout_sec": 0, 00:19:50.414 "psk": 
"key0", 00:19:50.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.414 "hdgst": false, 00:19:50.414 "ddgst": false, 00:19:50.414 "multipath": "multipath" 00:19:50.414 } 00:19:50.414 }, 00:19:50.414 { 00:19:50.414 "method": "bdev_nvme_set_hotplug", 00:19:50.414 "params": { 00:19:50.414 "period_us": 100000, 00:19:50.414 "enable": false 00:19:50.414 } 00:19:50.414 }, 00:19:50.414 { 00:19:50.414 "method": "bdev_wait_for_examine" 00:19:50.414 } 00:19:50.414 ] 00:19:50.414 }, 00:19:50.414 { 00:19:50.414 "subsystem": "nbd", 00:19:50.414 "config": [] 00:19:50.414 } 00:19:50.414 ] 00:19:50.414 }' 00:19:50.414 [2024-11-20 09:05:15.727348] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:19:50.414 [2024-11-20 09:05:15.727399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid713774 ] 00:19:50.414 [2024-11-20 09:05:15.817172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.414 [2024-11-20 09:05:15.852292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.675 [2024-11-20 09:05:15.991533] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.247 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.247 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:51.248 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:51.248 Running I/O for 10 seconds... 
00:19:53.131 4705.00 IOPS, 18.38 MiB/s [2024-11-20T08:05:20.043Z] 5307.50 IOPS, 20.73 MiB/s [2024-11-20T08:05:20.986Z] 5507.00 IOPS, 21.51 MiB/s [2024-11-20T08:05:21.930Z] 5652.50 IOPS, 22.08 MiB/s [2024-11-20T08:05:22.873Z] 5596.80 IOPS, 21.86 MiB/s [2024-11-20T08:05:23.814Z] 5578.17 IOPS, 21.79 MiB/s [2024-11-20T08:05:24.757Z] 5713.29 IOPS, 22.32 MiB/s [2024-11-20T08:05:25.699Z] 5762.12 IOPS, 22.51 MiB/s [2024-11-20T08:05:26.690Z] 5737.33 IOPS, 22.41 MiB/s [2024-11-20T08:05:27.022Z] 5677.10 IOPS, 22.18 MiB/s 00:20:01.493 Latency(us) 00:20:01.493 [2024-11-20T08:05:27.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.493 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:01.493 Verification LBA range: start 0x0 length 0x2000 00:20:01.493 TLSTESTn1 : 10.04 5665.80 22.13 0.00 0.00 22526.39 6089.39 39976.96 00:20:01.493 [2024-11-20T08:05:27.022Z] =================================================================================================================== 00:20:01.493 [2024-11-20T08:05:27.023Z] Total : 5665.80 22.13 0.00 0.00 22526.39 6089.39 39976.96 00:20:01.494 { 00:20:01.494 "results": [ 00:20:01.494 { 00:20:01.494 "job": "TLSTESTn1", 00:20:01.494 "core_mask": "0x4", 00:20:01.494 "workload": "verify", 00:20:01.494 "status": "finished", 00:20:01.494 "verify_range": { 00:20:01.494 "start": 0, 00:20:01.494 "length": 8192 00:20:01.494 }, 00:20:01.494 "queue_depth": 128, 00:20:01.494 "io_size": 4096, 00:20:01.494 "runtime": 10.042363, 00:20:01.494 "iops": 5665.7979800172525, 00:20:01.494 "mibps": 22.132023359442393, 00:20:01.494 "io_failed": 0, 00:20:01.494 "io_timeout": 0, 00:20:01.494 "avg_latency_us": 22526.38889802805, 00:20:01.494 "min_latency_us": 6089.386666666666, 00:20:01.494 "max_latency_us": 39976.96 00:20:01.494 } 00:20:01.494 ], 00:20:01.494 "core_count": 1 00:20:01.494 } 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 713774 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 713774 ']' 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 713774 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 713774 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 713774' 00:20:01.494 killing process with pid 713774 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 713774 00:20:01.494 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.494 00:20:01.494 Latency(us) 00:20:01.494 [2024-11-20T08:05:27.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.494 [2024-11-20T08:05:27.023Z] =================================================================================================================== 00:20:01.494 [2024-11-20T08:05:27.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 713774 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 713435 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' 
-z 713435 ']' 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 713435 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 713435 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 713435' 00:20:01.494 killing process with pid 713435 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 713435 00:20:01.494 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 713435 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=715829 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 715829 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 715829 ']' 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.782 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.782 [2024-11-20 09:05:27.109478] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:20:01.782 [2024-11-20 09:05:27.109534] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.782 [2024-11-20 09:05:27.203990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.782 [2024-11-20 09:05:27.241364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.782 [2024-11-20 09:05:27.241404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.782 [2024-11-20 09:05:27.241412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.782 [2024-11-20 09:05:27.241420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.782 [2024-11-20 09:05:27.241426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:01.782 [2024-11-20 09:05:27.242008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.732 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.732 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:02.732 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:02.732 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.732 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.732 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.732 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.rBZjjuoquE 00:20:02.732 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rBZjjuoquE 00:20:02.732 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:02.732 [2024-11-20 09:05:28.139879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.732 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:02.992 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.253 [2024-11-20 09:05:28.532870] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.253 [2024-11-20 09:05:28.533220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:03.253 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:03.253 malloc0 00:20:03.253 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:03.513 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rBZjjuoquE 00:20:03.773 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:04.034 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=716417 00:20:04.034 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:04.034 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.034 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 716417 /var/tmp/bdevperf.sock 00:20:04.034 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 716417 ']' 00:20:04.034 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.034 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.034 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:20:04.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.034 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.034 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.034 [2024-11-20 09:05:29.394673] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:20:04.034 [2024-11-20 09:05:29.394745] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716417 ] 00:20:04.034 [2024-11-20 09:05:29.480894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.034 [2024-11-20 09:05:29.510707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.975 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.976 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:04.976 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rBZjjuoquE 00:20:04.976 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:04.976 [2024-11-20 09:05:30.493997] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.236 nvme0n1 00:20:05.236 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:05.237 Running I/O for 1 seconds... 00:20:06.179 6004.00 IOPS, 23.45 MiB/s 00:20:06.179 Latency(us) 00:20:06.179 [2024-11-20T08:05:31.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.179 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:06.179 Verification LBA range: start 0x0 length 0x2000 00:20:06.179 nvme0n1 : 1.02 6038.49 23.59 0.00 0.00 21055.09 4724.05 25668.27 00:20:06.179 [2024-11-20T08:05:31.708Z] =================================================================================================================== 00:20:06.179 [2024-11-20T08:05:31.708Z] Total : 6038.49 23.59 0.00 0.00 21055.09 4724.05 25668.27 00:20:06.179 { 00:20:06.179 "results": [ 00:20:06.179 { 00:20:06.179 "job": "nvme0n1", 00:20:06.179 "core_mask": "0x2", 00:20:06.179 "workload": "verify", 00:20:06.179 "status": "finished", 00:20:06.179 "verify_range": { 00:20:06.179 "start": 0, 00:20:06.179 "length": 8192 00:20:06.179 }, 00:20:06.179 "queue_depth": 128, 00:20:06.179 "io_size": 4096, 00:20:06.179 "runtime": 1.015485, 00:20:06.179 "iops": 6038.493921623658, 00:20:06.179 "mibps": 23.587866881342414, 00:20:06.179 "io_failed": 0, 00:20:06.179 "io_timeout": 0, 00:20:06.179 "avg_latency_us": 21055.087662535334, 00:20:06.179 "min_latency_us": 4724.053333333333, 00:20:06.179 "max_latency_us": 25668.266666666666 00:20:06.179 } 00:20:06.179 ], 00:20:06.179 "core_count": 1 00:20:06.179 } 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 716417 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 716417 ']' 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 716417 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # 
uname 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 716417 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 716417' 00:20:06.440 killing process with pid 716417 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 716417 00:20:06.440 Received shutdown signal, test time was about 1.000000 seconds 00:20:06.440 00:20:06.440 Latency(us) 00:20:06.440 [2024-11-20T08:05:31.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.440 [2024-11-20T08:05:31.969Z] =================================================================================================================== 00:20:06.440 [2024-11-20T08:05:31.969Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 716417 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 715829 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 715829 ']' 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 715829 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 715829 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 715829' 00:20:06.440 killing process with pid 715829 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 715829 00:20:06.440 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 715829 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=716856 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 716856 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 716856 ']' 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.701 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.701 [2024-11-20 09:05:32.145259] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:20:06.701 [2024-11-20 09:05:32.145318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.962 [2024-11-20 09:05:32.243970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.962 [2024-11-20 09:05:32.294617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.963 [2024-11-20 09:05:32.294677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.963 [2024-11-20 09:05:32.294685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.963 [2024-11-20 09:05:32.294693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.963 [2024-11-20 09:05:32.294699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
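
The `uname` / `ps --no-headers -o comm=` / `kill` sequence traced above is the `killprocess` helper from `autotest_common.sh`: it checks the process is alive, refuses to kill a process whose command name is `sudo`, then kills it and waits for shutdown. A minimal sketch of that pattern (function body reconstructed for illustration, not copied from SPDK's source):

```shell
# Hedged sketch of the killprocess pattern seen in the trace above:
# verify the pid is alive, never kill the sudo wrapper itself, then kill and reap.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # Is the process still running?
    kill -0 "$pid" 2>/dev/null || return 1
    # Resolve the command name exactly as the trace does.
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    # Guard: killing the sudo wrapper would orphan the real target.
    [ "$process_name" = "sudo" ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # Reap the child so the pid slot is released before the next test stage.
    wait "$pid" 2>/dev/null || true
}
```

The `sudo` guard matters in this harness because tests are frequently launched through `sudo`, and `ps -o comm=` on the wrapper pid would otherwise make the helper kill the wrapper instead of the reactor process it fronts.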
00:20:06.963 [2024-11-20 09:05:32.295504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.533 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.533 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:07.533 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.533 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.533 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.533 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.533 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:07.533 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.533 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.533 [2024-11-20 09:05:32.998729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.533 malloc0 00:20:07.533 [2024-11-20 09:05:33.025598] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.533 [2024-11-20 09:05:33.025816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.533 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.533 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=717202 00:20:07.533 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 717202 /var/tmp/bdevperf.sock 00:20:07.533 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:07.533 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 717202 ']' 00:20:07.533 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.533 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.533 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.533 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.533 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.793 [2024-11-20 09:05:33.112732] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:20:07.793 [2024-11-20 09:05:33.112785] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid717202 ] 00:20:07.793 [2024-11-20 09:05:33.197134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.793 [2024-11-20 09:05:33.226855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.733 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.733 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:08.733 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rBZjjuoquE 00:20:08.733 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:08.733 [2024-11-20 09:05:34.218077] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.994 nvme0n1 00:20:08.994 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.994 Running I/O for 1 seconds... 
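
The bdevperf results that follow report both `iops` and `mibps` for the 4 KiB verify workload; the second is derived from the first, since MiB/s is just IOPS times the I/O size. A small check of that arithmetic (editorial illustration; the function name is ours, the figures are taken from this log):

```python
# Hedged sketch: relate bdevperf's reported "mibps" to its "iops" and io_size.
def mib_per_s(iops: float, io_size_bytes: int) -> float:
    """Throughput in MiB/s implied by an IOPS figure at a fixed I/O size."""
    return iops * io_size_bytes / (1024 * 1024)

# Figures from the results block in this log: io_size 4096, runtime ~1.0137 s.
iops = 6344.972359122279
print(round(mib_per_s(iops, 4096), 6))  # close to the reported mibps of ~24.785048
```

Note the raw "6304.00 IOPS" line is the per-second progress sample, while the JSON `iops` field is total I/Os divided by the measured runtime (1.013716 s), which is why the two differ slightly.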
00:20:09.935 6304.00 IOPS, 24.62 MiB/s 00:20:09.935 Latency(us) 00:20:09.935 [2024-11-20T08:05:35.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.935 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:09.935 Verification LBA range: start 0x0 length 0x2000 00:20:09.935 nvme0n1 : 1.01 6344.97 24.79 0.00 0.00 20033.17 4614.83 36481.71 00:20:09.935 [2024-11-20T08:05:35.464Z] =================================================================================================================== 00:20:09.935 [2024-11-20T08:05:35.464Z] Total : 6344.97 24.79 0.00 0.00 20033.17 4614.83 36481.71 00:20:09.935 { 00:20:09.935 "results": [ 00:20:09.935 { 00:20:09.935 "job": "nvme0n1", 00:20:09.935 "core_mask": "0x2", 00:20:09.935 "workload": "verify", 00:20:09.935 "status": "finished", 00:20:09.935 "verify_range": { 00:20:09.935 "start": 0, 00:20:09.935 "length": 8192 00:20:09.935 }, 00:20:09.935 "queue_depth": 128, 00:20:09.935 "io_size": 4096, 00:20:09.935 "runtime": 1.013716, 00:20:09.935 "iops": 6344.972359122279, 00:20:09.935 "mibps": 24.785048277821403, 00:20:09.935 "io_failed": 0, 00:20:09.935 "io_timeout": 0, 00:20:09.935 "avg_latency_us": 20033.171741293532, 00:20:09.935 "min_latency_us": 4614.826666666667, 00:20:09.935 "max_latency_us": 36481.706666666665 00:20:09.935 } 00:20:09.935 ], 00:20:09.935 "core_count": 1 00:20:09.935 } 00:20:09.935 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:09.935 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.935 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.195 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.195 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:10.195 "subsystems": [ 00:20:10.195 { 00:20:10.195 "subsystem": 
"keyring", 00:20:10.195 "config": [ 00:20:10.195 { 00:20:10.195 "method": "keyring_file_add_key", 00:20:10.195 "params": { 00:20:10.195 "name": "key0", 00:20:10.195 "path": "/tmp/tmp.rBZjjuoquE" 00:20:10.195 } 00:20:10.195 } 00:20:10.195 ] 00:20:10.195 }, 00:20:10.195 { 00:20:10.195 "subsystem": "iobuf", 00:20:10.195 "config": [ 00:20:10.195 { 00:20:10.195 "method": "iobuf_set_options", 00:20:10.195 "params": { 00:20:10.195 "small_pool_count": 8192, 00:20:10.195 "large_pool_count": 1024, 00:20:10.195 "small_bufsize": 8192, 00:20:10.195 "large_bufsize": 135168, 00:20:10.195 "enable_numa": false 00:20:10.195 } 00:20:10.195 } 00:20:10.195 ] 00:20:10.195 }, 00:20:10.195 { 00:20:10.195 "subsystem": "sock", 00:20:10.195 "config": [ 00:20:10.195 { 00:20:10.195 "method": "sock_set_default_impl", 00:20:10.195 "params": { 00:20:10.195 "impl_name": "posix" 00:20:10.195 } 00:20:10.195 }, 00:20:10.195 { 00:20:10.195 "method": "sock_impl_set_options", 00:20:10.195 "params": { 00:20:10.195 "impl_name": "ssl", 00:20:10.195 "recv_buf_size": 4096, 00:20:10.195 "send_buf_size": 4096, 00:20:10.195 "enable_recv_pipe": true, 00:20:10.195 "enable_quickack": false, 00:20:10.195 "enable_placement_id": 0, 00:20:10.196 "enable_zerocopy_send_server": true, 00:20:10.196 "enable_zerocopy_send_client": false, 00:20:10.196 "zerocopy_threshold": 0, 00:20:10.196 "tls_version": 0, 00:20:10.196 "enable_ktls": false 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "sock_impl_set_options", 00:20:10.196 "params": { 00:20:10.196 "impl_name": "posix", 00:20:10.196 "recv_buf_size": 2097152, 00:20:10.196 "send_buf_size": 2097152, 00:20:10.196 "enable_recv_pipe": true, 00:20:10.196 "enable_quickack": false, 00:20:10.196 "enable_placement_id": 0, 00:20:10.196 "enable_zerocopy_send_server": true, 00:20:10.196 "enable_zerocopy_send_client": false, 00:20:10.196 "zerocopy_threshold": 0, 00:20:10.196 "tls_version": 0, 00:20:10.196 "enable_ktls": false 00:20:10.196 } 00:20:10.196 } 00:20:10.196 
] 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "subsystem": "vmd", 00:20:10.196 "config": [] 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "subsystem": "accel", 00:20:10.196 "config": [ 00:20:10.196 { 00:20:10.196 "method": "accel_set_options", 00:20:10.196 "params": { 00:20:10.196 "small_cache_size": 128, 00:20:10.196 "large_cache_size": 16, 00:20:10.196 "task_count": 2048, 00:20:10.196 "sequence_count": 2048, 00:20:10.196 "buf_count": 2048 00:20:10.196 } 00:20:10.196 } 00:20:10.196 ] 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "subsystem": "bdev", 00:20:10.196 "config": [ 00:20:10.196 { 00:20:10.196 "method": "bdev_set_options", 00:20:10.196 "params": { 00:20:10.196 "bdev_io_pool_size": 65535, 00:20:10.196 "bdev_io_cache_size": 256, 00:20:10.196 "bdev_auto_examine": true, 00:20:10.196 "iobuf_small_cache_size": 128, 00:20:10.196 "iobuf_large_cache_size": 16 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "bdev_raid_set_options", 00:20:10.196 "params": { 00:20:10.196 "process_window_size_kb": 1024, 00:20:10.196 "process_max_bandwidth_mb_sec": 0 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "bdev_iscsi_set_options", 00:20:10.196 "params": { 00:20:10.196 "timeout_sec": 30 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "bdev_nvme_set_options", 00:20:10.196 "params": { 00:20:10.196 "action_on_timeout": "none", 00:20:10.196 "timeout_us": 0, 00:20:10.196 "timeout_admin_us": 0, 00:20:10.196 "keep_alive_timeout_ms": 10000, 00:20:10.196 "arbitration_burst": 0, 00:20:10.196 "low_priority_weight": 0, 00:20:10.196 "medium_priority_weight": 0, 00:20:10.196 "high_priority_weight": 0, 00:20:10.196 "nvme_adminq_poll_period_us": 10000, 00:20:10.196 "nvme_ioq_poll_period_us": 0, 00:20:10.196 "io_queue_requests": 0, 00:20:10.196 "delay_cmd_submit": true, 00:20:10.196 "transport_retry_count": 4, 00:20:10.196 "bdev_retry_count": 3, 00:20:10.196 "transport_ack_timeout": 0, 00:20:10.196 "ctrlr_loss_timeout_sec": 0, 
00:20:10.196 "reconnect_delay_sec": 0, 00:20:10.196 "fast_io_fail_timeout_sec": 0, 00:20:10.196 "disable_auto_failback": false, 00:20:10.196 "generate_uuids": false, 00:20:10.196 "transport_tos": 0, 00:20:10.196 "nvme_error_stat": false, 00:20:10.196 "rdma_srq_size": 0, 00:20:10.196 "io_path_stat": false, 00:20:10.196 "allow_accel_sequence": false, 00:20:10.196 "rdma_max_cq_size": 0, 00:20:10.196 "rdma_cm_event_timeout_ms": 0, 00:20:10.196 "dhchap_digests": [ 00:20:10.196 "sha256", 00:20:10.196 "sha384", 00:20:10.196 "sha512" 00:20:10.196 ], 00:20:10.196 "dhchap_dhgroups": [ 00:20:10.196 "null", 00:20:10.196 "ffdhe2048", 00:20:10.196 "ffdhe3072", 00:20:10.196 "ffdhe4096", 00:20:10.196 "ffdhe6144", 00:20:10.196 "ffdhe8192" 00:20:10.196 ] 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "bdev_nvme_set_hotplug", 00:20:10.196 "params": { 00:20:10.196 "period_us": 100000, 00:20:10.196 "enable": false 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "bdev_malloc_create", 00:20:10.196 "params": { 00:20:10.196 "name": "malloc0", 00:20:10.196 "num_blocks": 8192, 00:20:10.196 "block_size": 4096, 00:20:10.196 "physical_block_size": 4096, 00:20:10.196 "uuid": "0c3fee8c-b02e-4e41-9bb3-3e240e31d8ae", 00:20:10.196 "optimal_io_boundary": 0, 00:20:10.196 "md_size": 0, 00:20:10.196 "dif_type": 0, 00:20:10.196 "dif_is_head_of_md": false, 00:20:10.196 "dif_pi_format": 0 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "bdev_wait_for_examine" 00:20:10.196 } 00:20:10.196 ] 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "subsystem": "nbd", 00:20:10.196 "config": [] 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "subsystem": "scheduler", 00:20:10.196 "config": [ 00:20:10.196 { 00:20:10.196 "method": "framework_set_scheduler", 00:20:10.196 "params": { 00:20:10.196 "name": "static" 00:20:10.196 } 00:20:10.196 } 00:20:10.196 ] 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "subsystem": "nvmf", 00:20:10.196 "config": [ 00:20:10.196 { 
00:20:10.196 "method": "nvmf_set_config", 00:20:10.196 "params": { 00:20:10.196 "discovery_filter": "match_any", 00:20:10.196 "admin_cmd_passthru": { 00:20:10.196 "identify_ctrlr": false 00:20:10.196 }, 00:20:10.196 "dhchap_digests": [ 00:20:10.196 "sha256", 00:20:10.196 "sha384", 00:20:10.196 "sha512" 00:20:10.196 ], 00:20:10.196 "dhchap_dhgroups": [ 00:20:10.196 "null", 00:20:10.196 "ffdhe2048", 00:20:10.196 "ffdhe3072", 00:20:10.196 "ffdhe4096", 00:20:10.196 "ffdhe6144", 00:20:10.196 "ffdhe8192" 00:20:10.196 ] 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "nvmf_set_max_subsystems", 00:20:10.196 "params": { 00:20:10.196 "max_subsystems": 1024 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "nvmf_set_crdt", 00:20:10.196 "params": { 00:20:10.196 "crdt1": 0, 00:20:10.196 "crdt2": 0, 00:20:10.196 "crdt3": 0 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "nvmf_create_transport", 00:20:10.196 "params": { 00:20:10.196 "trtype": "TCP", 00:20:10.196 "max_queue_depth": 128, 00:20:10.196 "max_io_qpairs_per_ctrlr": 127, 00:20:10.196 "in_capsule_data_size": 4096, 00:20:10.196 "max_io_size": 131072, 00:20:10.196 "io_unit_size": 131072, 00:20:10.196 "max_aq_depth": 128, 00:20:10.196 "num_shared_buffers": 511, 00:20:10.196 "buf_cache_size": 4294967295, 00:20:10.196 "dif_insert_or_strip": false, 00:20:10.196 "zcopy": false, 00:20:10.196 "c2h_success": false, 00:20:10.196 "sock_priority": 0, 00:20:10.196 "abort_timeout_sec": 1, 00:20:10.196 "ack_timeout": 0, 00:20:10.196 "data_wr_pool_size": 0 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "nvmf_create_subsystem", 00:20:10.196 "params": { 00:20:10.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.196 "allow_any_host": false, 00:20:10.196 "serial_number": "00000000000000000000", 00:20:10.196 "model_number": "SPDK bdev Controller", 00:20:10.196 "max_namespaces": 32, 00:20:10.196 "min_cntlid": 1, 00:20:10.196 "max_cntlid": 65519, 00:20:10.196 
"ana_reporting": false 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "nvmf_subsystem_add_host", 00:20:10.196 "params": { 00:20:10.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.196 "host": "nqn.2016-06.io.spdk:host1", 00:20:10.196 "psk": "key0" 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "nvmf_subsystem_add_ns", 00:20:10.196 "params": { 00:20:10.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.196 "namespace": { 00:20:10.196 "nsid": 1, 00:20:10.196 "bdev_name": "malloc0", 00:20:10.196 "nguid": "0C3FEE8CB02E4E419BB33E240E31D8AE", 00:20:10.196 "uuid": "0c3fee8c-b02e-4e41-9bb3-3e240e31d8ae", 00:20:10.196 "no_auto_visible": false 00:20:10.196 } 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "nvmf_subsystem_add_listener", 00:20:10.196 "params": { 00:20:10.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.196 "listen_address": { 00:20:10.196 "trtype": "TCP", 00:20:10.197 "adrfam": "IPv4", 00:20:10.197 "traddr": "10.0.0.2", 00:20:10.197 "trsvcid": "4420" 00:20:10.197 }, 00:20:10.197 "secure_channel": false, 00:20:10.197 "sock_impl": "ssl" 00:20:10.197 } 00:20:10.197 } 00:20:10.197 ] 00:20:10.197 } 00:20:10.197 ] 00:20:10.197 }' 00:20:10.197 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:10.457 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:10.457 "subsystems": [ 00:20:10.457 { 00:20:10.457 "subsystem": "keyring", 00:20:10.457 "config": [ 00:20:10.457 { 00:20:10.457 "method": "keyring_file_add_key", 00:20:10.457 "params": { 00:20:10.457 "name": "key0", 00:20:10.457 "path": "/tmp/tmp.rBZjjuoquE" 00:20:10.457 } 00:20:10.457 } 00:20:10.457 ] 00:20:10.457 }, 00:20:10.458 { 00:20:10.458 "subsystem": "iobuf", 00:20:10.458 "config": [ 00:20:10.458 { 00:20:10.458 "method": "iobuf_set_options", 00:20:10.458 "params": { 00:20:10.458 
"small_pool_count": 8192, 00:20:10.458 "large_pool_count": 1024, 00:20:10.458 "small_bufsize": 8192, 00:20:10.458 "large_bufsize": 135168, 00:20:10.458 "enable_numa": false 00:20:10.458 } 00:20:10.458 } 00:20:10.458 ] 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "subsystem": "sock", 00:20:10.458 "config": [ 00:20:10.458 { 00:20:10.458 "method": "sock_set_default_impl", 00:20:10.458 "params": { 00:20:10.458 "impl_name": "posix" 00:20:10.458 } 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "method": "sock_impl_set_options", 00:20:10.458 "params": { 00:20:10.458 "impl_name": "ssl", 00:20:10.458 "recv_buf_size": 4096, 00:20:10.458 "send_buf_size": 4096, 00:20:10.458 "enable_recv_pipe": true, 00:20:10.458 "enable_quickack": false, 00:20:10.458 "enable_placement_id": 0, 00:20:10.458 "enable_zerocopy_send_server": true, 00:20:10.458 "enable_zerocopy_send_client": false, 00:20:10.458 "zerocopy_threshold": 0, 00:20:10.458 "tls_version": 0, 00:20:10.458 "enable_ktls": false 00:20:10.458 } 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "method": "sock_impl_set_options", 00:20:10.458 "params": { 00:20:10.458 "impl_name": "posix", 00:20:10.458 "recv_buf_size": 2097152, 00:20:10.458 "send_buf_size": 2097152, 00:20:10.458 "enable_recv_pipe": true, 00:20:10.458 "enable_quickack": false, 00:20:10.458 "enable_placement_id": 0, 00:20:10.458 "enable_zerocopy_send_server": true, 00:20:10.458 "enable_zerocopy_send_client": false, 00:20:10.458 "zerocopy_threshold": 0, 00:20:10.458 "tls_version": 0, 00:20:10.458 "enable_ktls": false 00:20:10.458 } 00:20:10.458 } 00:20:10.458 ] 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "subsystem": "vmd", 00:20:10.458 "config": [] 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "subsystem": "accel", 00:20:10.458 "config": [ 00:20:10.458 { 00:20:10.458 "method": "accel_set_options", 00:20:10.458 "params": { 00:20:10.458 "small_cache_size": 128, 00:20:10.458 "large_cache_size": 16, 00:20:10.458 "task_count": 2048, 00:20:10.458 "sequence_count": 2048, 00:20:10.458 
"buf_count": 2048 00:20:10.458 } 00:20:10.458 } 00:20:10.458 ] 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "subsystem": "bdev", 00:20:10.458 "config": [ 00:20:10.458 { 00:20:10.458 "method": "bdev_set_options", 00:20:10.458 "params": { 00:20:10.458 "bdev_io_pool_size": 65535, 00:20:10.458 "bdev_io_cache_size": 256, 00:20:10.458 "bdev_auto_examine": true, 00:20:10.458 "iobuf_small_cache_size": 128, 00:20:10.458 "iobuf_large_cache_size": 16 00:20:10.458 } 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "method": "bdev_raid_set_options", 00:20:10.458 "params": { 00:20:10.458 "process_window_size_kb": 1024, 00:20:10.458 "process_max_bandwidth_mb_sec": 0 00:20:10.458 } 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "method": "bdev_iscsi_set_options", 00:20:10.458 "params": { 00:20:10.458 "timeout_sec": 30 00:20:10.458 } 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "method": "bdev_nvme_set_options", 00:20:10.458 "params": { 00:20:10.458 "action_on_timeout": "none", 00:20:10.458 "timeout_us": 0, 00:20:10.458 "timeout_admin_us": 0, 00:20:10.458 "keep_alive_timeout_ms": 10000, 00:20:10.458 "arbitration_burst": 0, 00:20:10.458 "low_priority_weight": 0, 00:20:10.458 "medium_priority_weight": 0, 00:20:10.458 "high_priority_weight": 0, 00:20:10.458 "nvme_adminq_poll_period_us": 10000, 00:20:10.458 "nvme_ioq_poll_period_us": 0, 00:20:10.458 "io_queue_requests": 512, 00:20:10.458 "delay_cmd_submit": true, 00:20:10.458 "transport_retry_count": 4, 00:20:10.458 "bdev_retry_count": 3, 00:20:10.458 "transport_ack_timeout": 0, 00:20:10.458 "ctrlr_loss_timeout_sec": 0, 00:20:10.458 "reconnect_delay_sec": 0, 00:20:10.458 "fast_io_fail_timeout_sec": 0, 00:20:10.458 "disable_auto_failback": false, 00:20:10.458 "generate_uuids": false, 00:20:10.458 "transport_tos": 0, 00:20:10.458 "nvme_error_stat": false, 00:20:10.458 "rdma_srq_size": 0, 00:20:10.458 "io_path_stat": false, 00:20:10.458 "allow_accel_sequence": false, 00:20:10.458 "rdma_max_cq_size": 0, 00:20:10.458 "rdma_cm_event_timeout_ms": 0, 
00:20:10.458 "dhchap_digests": [ 00:20:10.458 "sha256", 00:20:10.458 "sha384", 00:20:10.458 "sha512" 00:20:10.458 ], 00:20:10.458 "dhchap_dhgroups": [ 00:20:10.458 "null", 00:20:10.458 "ffdhe2048", 00:20:10.458 "ffdhe3072", 00:20:10.458 "ffdhe4096", 00:20:10.458 "ffdhe6144", 00:20:10.458 "ffdhe8192" 00:20:10.458 ] 00:20:10.458 } 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "method": "bdev_nvme_attach_controller", 00:20:10.458 "params": { 00:20:10.458 "name": "nvme0", 00:20:10.458 "trtype": "TCP", 00:20:10.458 "adrfam": "IPv4", 00:20:10.458 "traddr": "10.0.0.2", 00:20:10.458 "trsvcid": "4420", 00:20:10.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.458 "prchk_reftag": false, 00:20:10.458 "prchk_guard": false, 00:20:10.458 "ctrlr_loss_timeout_sec": 0, 00:20:10.458 "reconnect_delay_sec": 0, 00:20:10.458 "fast_io_fail_timeout_sec": 0, 00:20:10.458 "psk": "key0", 00:20:10.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.458 "hdgst": false, 00:20:10.458 "ddgst": false, 00:20:10.458 "multipath": "multipath" 00:20:10.458 } 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "method": "bdev_nvme_set_hotplug", 00:20:10.458 "params": { 00:20:10.458 "period_us": 100000, 00:20:10.458 "enable": false 00:20:10.458 } 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "method": "bdev_enable_histogram", 00:20:10.458 "params": { 00:20:10.458 "name": "nvme0n1", 00:20:10.458 "enable": true 00:20:10.458 } 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "method": "bdev_wait_for_examine" 00:20:10.458 } 00:20:10.458 ] 00:20:10.458 }, 00:20:10.458 { 00:20:10.458 "subsystem": "nbd", 00:20:10.458 "config": [] 00:20:10.458 } 00:20:10.458 ] 00:20:10.458 }' 00:20:10.458 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 717202 00:20:10.458 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 717202 ']' 00:20:10.458 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 717202 00:20:10.458 09:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.458 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.458 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 717202 00:20:10.458 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.458 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.458 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 717202' 00:20:10.458 killing process with pid 717202 00:20:10.458 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 717202 00:20:10.458 Received shutdown signal, test time was about 1.000000 seconds 00:20:10.458 00:20:10.458 Latency(us) 00:20:10.458 [2024-11-20T08:05:35.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.458 [2024-11-20T08:05:35.987Z] =================================================================================================================== 00:20:10.458 [2024-11-20T08:05:35.987Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.458 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 717202 00:20:10.459 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 716856 00:20:10.459 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 716856 ']' 00:20:10.459 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 716856 00:20:10.459 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.459 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.459 09:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 716856 00:20:10.721 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.721 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.721 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 716856' 00:20:10.721 killing process with pid 716856 00:20:10.721 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 716856 00:20:10.721 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 716856 00:20:10.721 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:10.721 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.721 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.721 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:10.721 "subsystems": [ 00:20:10.721 { 00:20:10.721 "subsystem": "keyring", 00:20:10.721 "config": [ 00:20:10.721 { 00:20:10.721 "method": "keyring_file_add_key", 00:20:10.721 "params": { 00:20:10.721 "name": "key0", 00:20:10.721 "path": "/tmp/tmp.rBZjjuoquE" 00:20:10.721 } 00:20:10.721 } 00:20:10.721 ] 00:20:10.721 }, 00:20:10.721 { 00:20:10.721 "subsystem": "iobuf", 00:20:10.721 "config": [ 00:20:10.721 { 00:20:10.721 "method": "iobuf_set_options", 00:20:10.721 "params": { 00:20:10.721 "small_pool_count": 8192, 00:20:10.721 "large_pool_count": 1024, 00:20:10.721 "small_bufsize": 8192, 00:20:10.721 "large_bufsize": 135168, 00:20:10.721 "enable_numa": false 00:20:10.721 } 00:20:10.721 } 00:20:10.721 ] 00:20:10.721 }, 00:20:10.721 { 00:20:10.721 "subsystem": "sock", 00:20:10.721 "config": [ 00:20:10.721 { 
00:20:10.721 "method": "sock_set_default_impl", 00:20:10.721 "params": { 00:20:10.721 "impl_name": "posix" 00:20:10.721 } 00:20:10.721 }, 00:20:10.721 { 00:20:10.721 "method": "sock_impl_set_options", 00:20:10.721 "params": { 00:20:10.721 "impl_name": "ssl", 00:20:10.721 "recv_buf_size": 4096, 00:20:10.721 "send_buf_size": 4096, 00:20:10.721 "enable_recv_pipe": true, 00:20:10.721 "enable_quickack": false, 00:20:10.721 "enable_placement_id": 0, 00:20:10.721 "enable_zerocopy_send_server": true, 00:20:10.721 "enable_zerocopy_send_client": false, 00:20:10.721 "zerocopy_threshold": 0, 00:20:10.721 "tls_version": 0, 00:20:10.721 "enable_ktls": false 00:20:10.721 } 00:20:10.721 }, 00:20:10.721 { 00:20:10.721 "method": "sock_impl_set_options", 00:20:10.721 "params": { 00:20:10.721 "impl_name": "posix", 00:20:10.721 "recv_buf_size": 2097152, 00:20:10.721 "send_buf_size": 2097152, 00:20:10.721 "enable_recv_pipe": true, 00:20:10.721 "enable_quickack": false, 00:20:10.721 "enable_placement_id": 0, 00:20:10.721 "enable_zerocopy_send_server": true, 00:20:10.721 "enable_zerocopy_send_client": false, 00:20:10.721 "zerocopy_threshold": 0, 00:20:10.721 "tls_version": 0, 00:20:10.721 "enable_ktls": false 00:20:10.721 } 00:20:10.721 } 00:20:10.721 ] 00:20:10.721 }, 00:20:10.721 { 00:20:10.721 "subsystem": "vmd", 00:20:10.721 "config": [] 00:20:10.721 }, 00:20:10.721 { 00:20:10.721 "subsystem": "accel", 00:20:10.721 "config": [ 00:20:10.721 { 00:20:10.721 "method": "accel_set_options", 00:20:10.721 "params": { 00:20:10.721 "small_cache_size": 128, 00:20:10.721 "large_cache_size": 16, 00:20:10.721 "task_count": 2048, 00:20:10.721 "sequence_count": 2048, 00:20:10.721 "buf_count": 2048 00:20:10.721 } 00:20:10.721 } 00:20:10.721 ] 00:20:10.721 }, 00:20:10.721 { 00:20:10.721 "subsystem": "bdev", 00:20:10.721 "config": [ 00:20:10.721 { 00:20:10.721 "method": "bdev_set_options", 00:20:10.721 "params": { 00:20:10.721 "bdev_io_pool_size": 65535, 00:20:10.721 "bdev_io_cache_size": 256, 
00:20:10.721 "bdev_auto_examine": true, 00:20:10.721 "iobuf_small_cache_size": 128, 00:20:10.721 "iobuf_large_cache_size": 16 00:20:10.721 } 00:20:10.721 }, 00:20:10.721 { 00:20:10.721 "method": "bdev_raid_set_options", 00:20:10.721 "params": { 00:20:10.721 "process_window_size_kb": 1024, 00:20:10.721 "process_max_bandwidth_mb_sec": 0 00:20:10.721 } 00:20:10.721 }, 00:20:10.721 { 00:20:10.721 "method": "bdev_iscsi_set_options", 00:20:10.721 "params": { 00:20:10.721 "timeout_sec": 30 00:20:10.721 } 00:20:10.721 }, 00:20:10.721 { 00:20:10.721 "method": "bdev_nvme_set_options", 00:20:10.721 "params": { 00:20:10.721 "action_on_timeout": "none", 00:20:10.722 "timeout_us": 0, 00:20:10.722 "timeout_admin_us": 0, 00:20:10.722 "keep_alive_timeout_ms": 10000, 00:20:10.722 "arbitration_burst": 0, 00:20:10.722 "low_priority_weight": 0, 00:20:10.722 "medium_priority_weight": 0, 00:20:10.722 "high_priority_weight": 0, 00:20:10.722 "nvme_adminq_poll_period_us": 10000, 00:20:10.722 "nvme_ioq_poll_period_us": 0, 00:20:10.722 "io_queue_requests": 0, 00:20:10.722 "delay_cmd_submit": true, 00:20:10.722 "transport_retry_count": 4, 00:20:10.722 "bdev_retry_count": 3, 00:20:10.722 "transport_ack_timeout": 0, 00:20:10.722 "ctrlr_loss_timeout_sec": 0, 00:20:10.722 "reconnect_delay_sec": 0, 00:20:10.722 "fast_io_fail_timeout_sec": 0, 00:20:10.722 "disable_auto_failback": false, 00:20:10.722 "generate_uuids": false, 00:20:10.722 "transport_tos": 0, 00:20:10.722 "nvme_error_stat": false, 00:20:10.722 "rdma_srq_size": 0, 00:20:10.722 "io_path_stat": false, 00:20:10.722 "allow_accel_sequence": false, 00:20:10.722 "rdma_max_cq_size": 0, 00:20:10.722 "rdma_cm_event_timeout_ms": 0, 00:20:10.722 "dhchap_digests": [ 00:20:10.722 "sha256", 00:20:10.722 "sha384", 00:20:10.722 "sha512" 00:20:10.722 ], 00:20:10.722 "dhchap_dhgroups": [ 00:20:10.722 "null", 00:20:10.722 "ffdhe2048", 00:20:10.722 "ffdhe3072", 00:20:10.722 "ffdhe4096", 00:20:10.722 "ffdhe6144", 00:20:10.722 "ffdhe8192" 00:20:10.722 ] 
00:20:10.722 } 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "method": "bdev_nvme_set_hotplug", 00:20:10.722 "params": { 00:20:10.722 "period_us": 100000, 00:20:10.722 "enable": false 00:20:10.722 } 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "method": "bdev_malloc_create", 00:20:10.722 "params": { 00:20:10.722 "name": "malloc0", 00:20:10.722 "num_blocks": 8192, 00:20:10.722 "block_size": 4096, 00:20:10.722 "physical_block_size": 4096, 00:20:10.722 "uuid": "0c3fee8c-b02e-4e41-9bb3-3e240e31d8ae", 00:20:10.722 "optimal_io_boundary": 0, 00:20:10.722 "md_size": 0, 00:20:10.722 "dif_type": 0, 00:20:10.722 "dif_is_head_of_md": false, 00:20:10.722 "dif_pi_format": 0 00:20:10.722 } 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "method": "bdev_wait_for_examine" 00:20:10.722 } 00:20:10.722 ] 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "subsystem": "nbd", 00:20:10.722 "config": [] 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "subsystem": "scheduler", 00:20:10.722 "config": [ 00:20:10.722 { 00:20:10.722 "method": "framework_set_scheduler", 00:20:10.722 "params": { 00:20:10.722 "name": "static" 00:20:10.722 } 00:20:10.722 } 00:20:10.722 ] 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "subsystem": "nvmf", 00:20:10.722 "config": [ 00:20:10.722 { 00:20:10.722 "method": "nvmf_set_config", 00:20:10.722 "params": { 00:20:10.722 "discovery_filter": "match_any", 00:20:10.722 "admin_cmd_passthru": { 00:20:10.722 "identify_ctrlr": false 00:20:10.722 }, 00:20:10.722 "dhchap_digests": [ 00:20:10.722 "sha256", 00:20:10.722 "sha384", 00:20:10.722 "sha512" 00:20:10.722 ], 00:20:10.722 "dhchap_dhgroups": [ 00:20:10.722 "null", 00:20:10.722 "ffdhe2048", 00:20:10.722 "ffdhe3072", 00:20:10.722 "ffdhe4096", 00:20:10.722 "ffdhe6144", 00:20:10.722 "ffdhe8192" 00:20:10.722 ] 00:20:10.722 } 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "method": "nvmf_set_max_subsystems", 00:20:10.722 "params": { 00:20:10.722 "max_subsystems": 1024 00:20:10.722 } 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "method": 
"nvmf_set_crdt", 00:20:10.722 "params": { 00:20:10.722 "crdt1": 0, 00:20:10.722 "crdt2": 0, 00:20:10.722 "crdt3": 0 00:20:10.722 } 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "method": "nvmf_create_transport", 00:20:10.722 "params": { 00:20:10.722 "trtype": "TCP", 00:20:10.722 "max_queue_depth": 128, 00:20:10.722 "max_io_qpairs_per_ctrlr": 127, 00:20:10.722 "in_capsule_data_size": 4096, 00:20:10.722 "max_io_size": 131072, 00:20:10.722 "io_unit_size": 131072, 00:20:10.722 "max_aq_depth": 128, 00:20:10.722 "num_shared_buffers": 511, 00:20:10.722 "buf_cache_size": 4294967295, 00:20:10.722 "dif_insert_or_strip": false, 00:20:10.722 "zcopy": false, 00:20:10.722 "c2h_success": false, 00:20:10.722 "sock_priority": 0, 00:20:10.722 "abort_timeout_sec": 1, 00:20:10.722 "ack_timeout": 0, 00:20:10.722 "data_wr_pool_size": 0 00:20:10.722 } 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "method": "nvmf_create_subsystem", 00:20:10.722 "params": { 00:20:10.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.722 "allow_any_host": false, 00:20:10.722 "serial_number": "00000000000000000000", 00:20:10.722 "model_number": "SPDK bdev Controller", 00:20:10.722 "max_namespaces": 32, 00:20:10.722 "min_cntlid": 1, 00:20:10.722 "max_cntlid": 65519, 00:20:10.722 "ana_reporting": false 00:20:10.722 } 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "method": "nvmf_subsystem_add_host", 00:20:10.722 "params": { 00:20:10.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.722 "host": "nqn.2016-06.io.spdk:host1", 00:20:10.722 "psk": "key0" 00:20:10.722 } 00:20:10.722 }, 00:20:10.722 { 00:20:10.722 "method": "nvmf_subsystem_add_ns", 00:20:10.722 "params": { 00:20:10.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.722 "namespace": { 00:20:10.722 "nsid": 1, 00:20:10.722 "bdev_name": "malloc0", 00:20:10.722 "nguid": "0C3FEE8CB02E4E419BB33E240E31D8AE", 00:20:10.722 "uuid": "0c3fee8c-b02e-4e41-9bb3-3e240e31d8ae", 00:20:10.722 "no_auto_visible": false 00:20:10.722 } 00:20:10.722 } 00:20:10.722 }, 00:20:10.722 { 
00:20:10.722 "method": "nvmf_subsystem_add_listener", 00:20:10.722 "params": { 00:20:10.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.722 "listen_address": { 00:20:10.722 "trtype": "TCP", 00:20:10.722 "adrfam": "IPv4", 00:20:10.722 "traddr": "10.0.0.2", 00:20:10.722 "trsvcid": "4420" 00:20:10.722 }, 00:20:10.722 "secure_channel": false, 00:20:10.722 "sock_impl": "ssl" 00:20:10.722 } 00:20:10.722 } 00:20:10.722 ] 00:20:10.722 } 00:20:10.722 ] 00:20:10.722 }' 00:20:10.722 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.722 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=717756 00:20:10.722 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 717756 00:20:10.722 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:10.722 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 717756 ']' 00:20:10.722 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.722 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.722 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.722 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.722 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.722 [2024-11-20 09:05:36.220990] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:20:10.722 [2024-11-20 09:05:36.221055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.983 [2024-11-20 09:05:36.311571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.983 [2024-11-20 09:05:36.341516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.983 [2024-11-20 09:05:36.341545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.983 [2024-11-20 09:05:36.341551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.983 [2024-11-20 09:05:36.341556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.983 [2024-11-20 09:05:36.341560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:10.983 [2024-11-20 09:05:36.342054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.242 [2024-11-20 09:05:36.535196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.242 [2024-11-20 09:05:36.567230] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:11.242 [2024-11-20 09:05:36.567425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.502 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.502 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:11.502 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.502 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.502 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.763 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.763 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=717915 00:20:11.763 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 717915 /var/tmp/bdevperf.sock 00:20:11.763 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 717915 ']' 00:20:11.763 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.763 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.763 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:11.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.763 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:11.763 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.763 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.763 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:11.763 "subsystems": [ 00:20:11.763 { 00:20:11.763 "subsystem": "keyring", 00:20:11.763 "config": [ 00:20:11.763 { 00:20:11.763 "method": "keyring_file_add_key", 00:20:11.763 "params": { 00:20:11.763 "name": "key0", 00:20:11.763 "path": "/tmp/tmp.rBZjjuoquE" 00:20:11.763 } 00:20:11.763 } 00:20:11.763 ] 00:20:11.763 }, 00:20:11.763 { 00:20:11.763 "subsystem": "iobuf", 00:20:11.763 "config": [ 00:20:11.763 { 00:20:11.763 "method": "iobuf_set_options", 00:20:11.763 "params": { 00:20:11.763 "small_pool_count": 8192, 00:20:11.763 "large_pool_count": 1024, 00:20:11.763 "small_bufsize": 8192, 00:20:11.763 "large_bufsize": 135168, 00:20:11.763 "enable_numa": false 00:20:11.763 } 00:20:11.763 } 00:20:11.763 ] 00:20:11.763 }, 00:20:11.763 { 00:20:11.763 "subsystem": "sock", 00:20:11.763 "config": [ 00:20:11.763 { 00:20:11.763 "method": "sock_set_default_impl", 00:20:11.763 "params": { 00:20:11.763 "impl_name": "posix" 00:20:11.763 } 00:20:11.763 }, 00:20:11.763 { 00:20:11.763 "method": "sock_impl_set_options", 00:20:11.763 "params": { 00:20:11.763 "impl_name": "ssl", 00:20:11.763 "recv_buf_size": 4096, 00:20:11.763 "send_buf_size": 4096, 00:20:11.763 "enable_recv_pipe": true, 00:20:11.763 "enable_quickack": false, 00:20:11.763 "enable_placement_id": 0, 00:20:11.763 "enable_zerocopy_send_server": true, 00:20:11.763 
"enable_zerocopy_send_client": false, 00:20:11.763 "zerocopy_threshold": 0, 00:20:11.763 "tls_version": 0, 00:20:11.763 "enable_ktls": false 00:20:11.763 } 00:20:11.763 }, 00:20:11.763 { 00:20:11.763 "method": "sock_impl_set_options", 00:20:11.763 "params": { 00:20:11.764 "impl_name": "posix", 00:20:11.764 "recv_buf_size": 2097152, 00:20:11.764 "send_buf_size": 2097152, 00:20:11.764 "enable_recv_pipe": true, 00:20:11.764 "enable_quickack": false, 00:20:11.764 "enable_placement_id": 0, 00:20:11.764 "enable_zerocopy_send_server": true, 00:20:11.764 "enable_zerocopy_send_client": false, 00:20:11.764 "zerocopy_threshold": 0, 00:20:11.764 "tls_version": 0, 00:20:11.764 "enable_ktls": false 00:20:11.764 } 00:20:11.764 } 00:20:11.764 ] 00:20:11.764 }, 00:20:11.764 { 00:20:11.764 "subsystem": "vmd", 00:20:11.764 "config": [] 00:20:11.764 }, 00:20:11.764 { 00:20:11.764 "subsystem": "accel", 00:20:11.764 "config": [ 00:20:11.764 { 00:20:11.764 "method": "accel_set_options", 00:20:11.764 "params": { 00:20:11.764 "small_cache_size": 128, 00:20:11.764 "large_cache_size": 16, 00:20:11.764 "task_count": 2048, 00:20:11.764 "sequence_count": 2048, 00:20:11.764 "buf_count": 2048 00:20:11.764 } 00:20:11.764 } 00:20:11.764 ] 00:20:11.764 }, 00:20:11.764 { 00:20:11.764 "subsystem": "bdev", 00:20:11.764 "config": [ 00:20:11.764 { 00:20:11.764 "method": "bdev_set_options", 00:20:11.764 "params": { 00:20:11.764 "bdev_io_pool_size": 65535, 00:20:11.764 "bdev_io_cache_size": 256, 00:20:11.764 "bdev_auto_examine": true, 00:20:11.764 "iobuf_small_cache_size": 128, 00:20:11.764 "iobuf_large_cache_size": 16 00:20:11.764 } 00:20:11.764 }, 00:20:11.764 { 00:20:11.764 "method": "bdev_raid_set_options", 00:20:11.764 "params": { 00:20:11.764 "process_window_size_kb": 1024, 00:20:11.764 "process_max_bandwidth_mb_sec": 0 00:20:11.764 } 00:20:11.764 }, 00:20:11.764 { 00:20:11.764 "method": "bdev_iscsi_set_options", 00:20:11.764 "params": { 00:20:11.764 "timeout_sec": 30 00:20:11.764 } 00:20:11.764 }, 
00:20:11.764 { 00:20:11.764 "method": "bdev_nvme_set_options", 00:20:11.764 "params": { 00:20:11.764 "action_on_timeout": "none", 00:20:11.764 "timeout_us": 0, 00:20:11.764 "timeout_admin_us": 0, 00:20:11.764 "keep_alive_timeout_ms": 10000, 00:20:11.764 "arbitration_burst": 0, 00:20:11.764 "low_priority_weight": 0, 00:20:11.764 "medium_priority_weight": 0, 00:20:11.764 "high_priority_weight": 0, 00:20:11.764 "nvme_adminq_poll_period_us": 10000, 00:20:11.764 "nvme_ioq_poll_period_us": 0, 00:20:11.764 "io_queue_requests": 512, 00:20:11.764 "delay_cmd_submit": true, 00:20:11.764 "transport_retry_count": 4, 00:20:11.764 "bdev_retry_count": 3, 00:20:11.764 "transport_ack_timeout": 0, 00:20:11.764 "ctrlr_loss_timeout_sec": 0, 00:20:11.764 "reconnect_delay_sec": 0, 00:20:11.764 "fast_io_fail_timeout_sec": 0, 00:20:11.764 "disable_auto_failback": false, 00:20:11.764 "generate_uuids": false, 00:20:11.764 "transport_tos": 0, 00:20:11.764 "nvme_error_stat": false, 00:20:11.764 "rdma_srq_size": 0, 00:20:11.764 "io_path_stat": false, 00:20:11.764 "allow_accel_sequence": false, 00:20:11.764 "rdma_max_cq_size": 0, 00:20:11.764 "rdma_cm_event_timeout_ms": 0, 00:20:11.764 "dhchap_digests": [ 00:20:11.764 "sha256", 00:20:11.764 "sha384", 00:20:11.764 "sha512" 00:20:11.764 ], 00:20:11.764 "dhchap_dhgroups": [ 00:20:11.764 "null", 00:20:11.764 "ffdhe2048", 00:20:11.764 "ffdhe3072", 00:20:11.764 "ffdhe4096", 00:20:11.764 "ffdhe6144", 00:20:11.764 "ffdhe8192" 00:20:11.764 ] 00:20:11.764 } 00:20:11.764 }, 00:20:11.764 { 00:20:11.764 "method": "bdev_nvme_attach_controller", 00:20:11.764 "params": { 00:20:11.764 "name": "nvme0", 00:20:11.764 "trtype": "TCP", 00:20:11.764 "adrfam": "IPv4", 00:20:11.764 "traddr": "10.0.0.2", 00:20:11.764 "trsvcid": "4420", 00:20:11.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.764 "prchk_reftag": false, 00:20:11.764 "prchk_guard": false, 00:20:11.764 "ctrlr_loss_timeout_sec": 0, 00:20:11.764 "reconnect_delay_sec": 0, 00:20:11.764 
"fast_io_fail_timeout_sec": 0, 00:20:11.764 "psk": "key0", 00:20:11.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.764 "hdgst": false, 00:20:11.764 "ddgst": false, 00:20:11.764 "multipath": "multipath" 00:20:11.764 } 00:20:11.764 }, 00:20:11.764 { 00:20:11.764 "method": "bdev_nvme_set_hotplug", 00:20:11.764 "params": { 00:20:11.764 "period_us": 100000, 00:20:11.764 "enable": false 00:20:11.764 } 00:20:11.764 }, 00:20:11.764 { 00:20:11.764 "method": "bdev_enable_histogram", 00:20:11.764 "params": { 00:20:11.764 "name": "nvme0n1", 00:20:11.764 "enable": true 00:20:11.764 } 00:20:11.764 }, 00:20:11.764 { 00:20:11.764 "method": "bdev_wait_for_examine" 00:20:11.764 } 00:20:11.764 ] 00:20:11.764 }, 00:20:11.764 { 00:20:11.764 "subsystem": "nbd", 00:20:11.764 "config": [] 00:20:11.764 } 00:20:11.764 ] 00:20:11.764 }' 00:20:11.764 [2024-11-20 09:05:37.089327] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:20:11.764 [2024-11-20 09:05:37.089381] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid717915 ] 00:20:11.764 [2024-11-20 09:05:37.171198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.764 [2024-11-20 09:05:37.201031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.024 [2024-11-20 09:05:37.335904] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.594 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.594 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:12.594 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:20:12.594 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:12.594 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.594 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:12.855 Running I/O for 1 seconds... 00:20:13.796 5951.00 IOPS, 23.25 MiB/s 00:20:13.796 Latency(us) 00:20:13.796 [2024-11-20T08:05:39.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.796 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:13.796 Verification LBA range: start 0x0 length 0x2000 00:20:13.796 nvme0n1 : 1.03 5909.94 23.09 0.00 0.00 21394.90 4669.44 27962.03 00:20:13.796 [2024-11-20T08:05:39.325Z] =================================================================================================================== 00:20:13.796 [2024-11-20T08:05:39.325Z] Total : 5909.94 23.09 0.00 0.00 21394.90 4669.44 27962.03 00:20:13.796 { 00:20:13.796 "results": [ 00:20:13.796 { 00:20:13.796 "job": "nvme0n1", 00:20:13.796 "core_mask": "0x2", 00:20:13.796 "workload": "verify", 00:20:13.796 "status": "finished", 00:20:13.796 "verify_range": { 00:20:13.796 "start": 0, 00:20:13.796 "length": 8192 00:20:13.796 }, 00:20:13.796 "queue_depth": 128, 00:20:13.796 "io_size": 4096, 00:20:13.796 "runtime": 1.028606, 00:20:13.796 "iops": 5909.940249230512, 00:20:13.796 "mibps": 23.085704098556686, 00:20:13.796 "io_failed": 0, 00:20:13.796 "io_timeout": 0, 00:20:13.796 "avg_latency_us": 21394.90360037287, 00:20:13.796 "min_latency_us": 4669.44, 00:20:13.796 "max_latency_us": 27962.02666666667 00:20:13.796 } 00:20:13.796 ], 00:20:13.796 "core_count": 1 00:20:13.796 } 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:13.796 nvmf_trace.0 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 717915 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 717915 ']' 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 717915 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.796 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 717915 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 717915' 00:20:14.056 killing process with pid 717915 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 717915 00:20:14.056 Received shutdown signal, test time was about 1.000000 seconds 00:20:14.056 00:20:14.056 Latency(us) 00:20:14.056 [2024-11-20T08:05:39.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.056 [2024-11-20T08:05:39.585Z] =================================================================================================================== 00:20:14.056 [2024-11-20T08:05:39.585Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 717915 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:14.056 rmmod nvme_tcp 00:20:14.056 rmmod nvme_fabrics 00:20:14.056 rmmod nvme_keyring 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 717756 ']' 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 717756 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 717756 ']' 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 717756 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.056 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 717756 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 717756' 00:20:14.317 killing process with pid 717756 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 717756 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 717756 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.317 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.rHCUioZbm0 /tmp/tmp.W9NWir0pvC /tmp/tmp.rBZjjuoquE 00:20:16.863 00:20:16.863 real 1m28.857s 00:20:16.863 user 2m21.532s 00:20:16.863 sys 0m26.590s 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.863 ************************************ 00:20:16.863 END TEST nvmf_tls 00:20:16.863 ************************************ 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:16.863 ************************************ 00:20:16.863 START TEST nvmf_fips 00:20:16.863 ************************************ 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:16.863 * Looking for test storage... 00:20:16.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:16.863 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.863 
09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:16.863 09:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:16.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.863 --rc genhtml_branch_coverage=1 00:20:16.863 --rc genhtml_function_coverage=1 00:20:16.863 --rc genhtml_legend=1 00:20:16.863 --rc geninfo_all_blocks=1 00:20:16.863 --rc geninfo_unexecuted_blocks=1 00:20:16.863 00:20:16.863 ' 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:16.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.863 --rc genhtml_branch_coverage=1 00:20:16.863 --rc genhtml_function_coverage=1 00:20:16.863 --rc genhtml_legend=1 00:20:16.863 --rc geninfo_all_blocks=1 00:20:16.863 --rc geninfo_unexecuted_blocks=1 00:20:16.863 00:20:16.863 ' 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:16.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.863 --rc genhtml_branch_coverage=1 00:20:16.863 --rc genhtml_function_coverage=1 00:20:16.863 --rc genhtml_legend=1 00:20:16.863 --rc geninfo_all_blocks=1 00:20:16.863 --rc geninfo_unexecuted_blocks=1 00:20:16.863 00:20:16.863 ' 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:16.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.863 --rc genhtml_branch_coverage=1 00:20:16.863 --rc genhtml_function_coverage=1 00:20:16.863 --rc genhtml_legend=1 00:20:16.863 --rc geninfo_all_blocks=1 00:20:16.863 --rc geninfo_unexecuted_blocks=1 00:20:16.863 00:20:16.863 ' 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.863 09:05:42 
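The `nvme gen-hostnqn` call traced above yields an NQN of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>`. A sketch of constructing the same shape without nvme-cli, using the Linux kernel's random UUID source as a stand-in (an assumption; the fallback constant is purely illustrative):

```shell
#!/usr/bin/env bash
# Sketch: build a host NQN in the uuid form produced by `nvme gen-hostnqn`.
# /proc/sys/kernel/random/uuid is Linux-specific; the zero UUID is a fallback.
uuid=$(cat /proc/sys/kernel/random/uuid 2>/dev/null \
       || echo 00000000-0000-0000-0000-000000000000)
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
echo "$hostnqn"
```

The common.sh trace then reuses the UUID portion as `NVME_HOSTID` and passes both as `--hostnqn`/`--hostid` connect arguments.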
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.863 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.864 09:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:16.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
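fips.sh@96 above locates the OpenSSL modules directory and checks for `fips.so` before proceeding. A hedged sketch of that probe (the `/usr/lib64/ossl-modules` path seen in the log is distribution-specific; `fips_module` is a hypothetical variable name):

```shell
#!/usr/bin/env bash
# Sketch: detect whether the system OpenSSL ships a FIPS provider module,
# mirroring the fips.sh@96 check in the trace. Tolerates a missing openssl.
modulesdir=$(openssl info -modulesdir 2>/dev/null || true)
if [ -n "$modulesdir" ] && [ -f "$modulesdir/fips.so" ]; then
    fips_module=yes
else
    fips_module=no
fi
echo "fips.so present: $fips_module"
```

On the RHEL-like build host in this log, `openssl fipsinstall` is deliberately disabled, which is why the script falls through to its distro-specific `build_openssl_config` callback instead.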
common/autotest_common.sh@646 -- # type -P openssl 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:16.864 Error setting digest 00:20:16.864 4032F45C1C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:16.864 4032F45C1C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:16.864 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:16.865 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:16.865 09:05:42 
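The `NOT openssl md5 /dev/fd/62` sequence above is a negative test: under the FIPS provider, MD5 must fail, and the harness converts that expected failure (`es=1`) into success. A minimal sketch of the pattern (the helper name `expect_failure` is hypothetical; the real harness helper is `NOT` in autotest_common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the NOT/es pattern: run a command that is *expected* to fail and
# succeed only if it did fail, as with `openssl md5` under FIPS above.
expect_failure() {
    local es=0
    "$@" || es=$?        # capture the wrapped command's exit status
    (( es != 0 ))        # function succeeds only when the command failed
}
expect_failure false && echo "failure detected, as expected"
```

This is why the "Error setting digest" lines in the log are not a test failure: they are the evidence the FIPS provider is actually enforcing the algorithm restriction.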
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.865 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.865 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.865 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:16.865 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:16.865 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.865 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:25.004 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:25.005 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:25.005 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:25.005 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:25.005 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.005 09:05:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:25.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:20:25.005 00:20:25.005 --- 10.0.0.2 ping statistics --- 00:20:25.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.005 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:25.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:20:25.005 00:20:25.005 --- 10.0.0.1 ping statistics --- 00:20:25.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.005 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:25.005 09:05:49 
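nvmf_tcp_init above moves one port of the NIC into a dedicated namespace, assigns the 10.0.0.x pair, opens port 4420, and ping-checks both directions. Since the real commands need root, here is a dry-run sketch that only echoes them; interface and namespace names are taken from the log, and `run` is a hypothetical wrapper:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-namespace plumbing from nvmf/common.sh above.
# run() only prints, so this is safe to execute without root or real NICs.
run() { echo "+ $*"; }
NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                       # target side into netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                    # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator
```

Putting the target in its own namespace is what lets initiator and target share one host while still exercising a real TCP path over the physical NIC pair.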
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:25.005 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=722636 00:20:25.006 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 722636 00:20:25.006 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:25.006 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 722636 ']' 00:20:25.006 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.006 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.006 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.006 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.006 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:25.006 [2024-11-20 09:05:49.953777] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:20:25.006 [2024-11-20 09:05:49.953853] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.006 [2024-11-20 09:05:50.056682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.006 [2024-11-20 09:05:50.107233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.006 [2024-11-20 09:05:50.107287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.006 [2024-11-20 09:05:50.107297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.006 [2024-11-20 09:05:50.107304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.006 [2024-11-20 09:05:50.107311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:25.006 [2024-11-20 09:05:50.108081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.267 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.267 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:25.267 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.267 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.267 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:25.529 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.529 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:25.529 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:25.529 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:25.529 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Yos 00:20:25.529 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:25.529 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Yos 00:20:25.529 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Yos 00:20:25.529 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Yos 00:20:25.529 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:25.529 [2024-11-20 09:05:50.985953] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.529 [2024-11-20 09:05:51.001959] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.529 [2024-11-20 09:05:51.002285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.529 malloc0 00:20:25.791 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.791 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=722976 00:20:25.791 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 722976 /var/tmp/bdevperf.sock 00:20:25.791 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.791 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 722976 ']' 00:20:25.791 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.791 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.791 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.791 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.791 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:25.791 [2024-11-20 09:05:51.147841] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:20:25.791 [2024-11-20 09:05:51.147916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722976 ] 00:20:25.791 [2024-11-20 09:05:51.240198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.791 [2024-11-20 09:05:51.290797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.736 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.736 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:26.736 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Yos 00:20:26.736 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:26.996 [2024-11-20 09:05:52.327935] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.996 TLSTESTn1 00:20:26.996 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:27.256 Running I/O for 10 seconds... 
00:20:29.137 6171.00 IOPS, 24.11 MiB/s [2024-11-20T08:05:55.607Z] 6249.50 IOPS, 24.41 MiB/s [2024-11-20T08:05:56.550Z] 6263.67 IOPS, 24.47 MiB/s [2024-11-20T08:05:57.934Z] 6257.25 IOPS, 24.44 MiB/s [2024-11-20T08:05:58.876Z] 6233.80 IOPS, 24.35 MiB/s [2024-11-20T08:05:59.825Z] 6227.83 IOPS, 24.33 MiB/s [2024-11-20T08:06:00.769Z] 6197.86 IOPS, 24.21 MiB/s [2024-11-20T08:06:01.709Z] 6232.12 IOPS, 24.34 MiB/s [2024-11-20T08:06:02.646Z] 6213.22 IOPS, 24.27 MiB/s [2024-11-20T08:06:02.646Z] 6224.80 IOPS, 24.32 MiB/s 00:20:37.117 Latency(us) 00:20:37.117 [2024-11-20T08:06:02.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.117 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:37.117 Verification LBA range: start 0x0 length 0x2000 00:20:37.117 TLSTESTn1 : 10.02 6225.43 24.32 0.00 0.00 20523.66 6144.00 25668.27 00:20:37.117 [2024-11-20T08:06:02.646Z] =================================================================================================================== 00:20:37.117 [2024-11-20T08:06:02.646Z] Total : 6225.43 24.32 0.00 0.00 20523.66 6144.00 25668.27 00:20:37.117 { 00:20:37.117 "results": [ 00:20:37.117 { 00:20:37.117 "job": "TLSTESTn1", 00:20:37.117 "core_mask": "0x4", 00:20:37.117 "workload": "verify", 00:20:37.118 "status": "finished", 00:20:37.118 "verify_range": { 00:20:37.118 "start": 0, 00:20:37.118 "length": 8192 00:20:37.118 }, 00:20:37.118 "queue_depth": 128, 00:20:37.118 "io_size": 4096, 00:20:37.118 "runtime": 10.019222, 00:20:37.118 "iops": 6225.433471780543, 00:20:37.118 "mibps": 24.318099499142747, 00:20:37.118 "io_failed": 0, 00:20:37.118 "io_timeout": 0, 00:20:37.118 "avg_latency_us": 20523.661378565856, 00:20:37.118 "min_latency_us": 6144.0, 00:20:37.118 "max_latency_us": 25668.266666666666 00:20:37.118 } 00:20:37.118 ], 00:20:37.118 "core_count": 1 00:20:37.118 } 00:20:37.118 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:37.118 09:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:37.118 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:37.118 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:37.118 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:37.118 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:37.118 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:37.118 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:37.118 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:37.118 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:37.118 nvmf_trace.0 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 722976 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 722976 ']' 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 722976 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 722976 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 722976' 00:20:37.378 killing process with pid 722976 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 722976 00:20:37.378 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.378 00:20:37.378 Latency(us) 00:20:37.378 [2024-11-20T08:06:02.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.378 [2024-11-20T08:06:02.907Z] =================================================================================================================== 00:20:37.378 [2024-11-20T08:06:02.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 722976 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.378 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.378 rmmod nvme_tcp 00:20:37.378 rmmod nvme_fabrics 00:20:37.378 rmmod nvme_keyring 00:20:37.638 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.638 09:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:37.638 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:37.638 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 722636 ']' 00:20:37.638 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 722636 00:20:37.638 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 722636 ']' 00:20:37.638 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 722636 00:20:37.638 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:37.638 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.638 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 722636 00:20:37.638 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:37.639 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:37.639 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 722636' 00:20:37.639 killing process with pid 722636 00:20:37.639 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 722636 00:20:37.639 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 722636 00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.639 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Yos 00:20:40.182 00:20:40.182 real 0m23.323s 00:20:40.182 user 0m25.017s 00:20:40.182 sys 0m9.713s 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:40.182 ************************************ 00:20:40.182 END TEST nvmf_fips 00:20:40.182 ************************************ 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:40.182 ************************************ 00:20:40.182 START TEST nvmf_control_msg_list 00:20:40.182 ************************************ 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:40.182 * Looking for test storage... 00:20:40.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:40.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.182 --rc genhtml_branch_coverage=1 00:20:40.182 --rc genhtml_function_coverage=1 00:20:40.182 --rc genhtml_legend=1 00:20:40.182 --rc geninfo_all_blocks=1 00:20:40.182 --rc geninfo_unexecuted_blocks=1 00:20:40.182 00:20:40.182 ' 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:40.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.182 --rc genhtml_branch_coverage=1 00:20:40.182 --rc genhtml_function_coverage=1 00:20:40.182 --rc genhtml_legend=1 00:20:40.182 --rc geninfo_all_blocks=1 00:20:40.182 --rc geninfo_unexecuted_blocks=1 00:20:40.182 00:20:40.182 ' 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:40.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.182 --rc genhtml_branch_coverage=1 00:20:40.182 --rc genhtml_function_coverage=1 00:20:40.182 --rc genhtml_legend=1 00:20:40.182 --rc geninfo_all_blocks=1 00:20:40.182 --rc geninfo_unexecuted_blocks=1 00:20:40.182 00:20:40.182 ' 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:40.182 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.182 --rc genhtml_branch_coverage=1 00:20:40.182 --rc genhtml_function_coverage=1 00:20:40.182 --rc genhtml_legend=1 00:20:40.182 --rc geninfo_all_blocks=1 00:20:40.182 --rc geninfo_unexecuted_blocks=1 00:20:40.182 00:20:40.182 ' 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.182 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.183 09:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.183 09:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.183 09:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:40.183 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:48.323 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.323 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:48.323 09:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:48.323 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:48.323 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:48.323 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:48.323 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:48.323 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:48.323 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:48.323 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:48.323 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:48.323 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:48.324 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:48.324 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:48.324 09:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:48.324 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.324 09:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:48.324 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.324 09:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:48.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:20:48.324 00:20:48.324 --- 10.0.0.2 ping statistics --- 00:20:48.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.324 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:48.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:20:48.324 00:20:48.324 --- 10.0.0.1 ping statistics --- 00:20:48.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.324 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.324 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:48.325 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:48.325 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:48.325 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:48.325 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=729958 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 729958 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 729958 ']' 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.325 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:48.325 [2024-11-20 09:06:13.109097] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:20:48.325 [2024-11-20 09:06:13.109173] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.325 [2024-11-20 09:06:13.208087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.325 [2024-11-20 09:06:13.259913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.325 [2024-11-20 09:06:13.259969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.325 [2024-11-20 09:06:13.259977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.325 [2024-11-20 09:06:13.259985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.325 [2024-11-20 09:06:13.259991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
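[editor's note: the `waitforlisten 729958` call traced above blocks until the freshly started `nvmf_tgt` is listening on its `/var/tmp/spdk.sock` RPC socket. A minimal sketch of that poll-until-socket-exists pattern, with an illustrative demo server — the function name, socket path, and python3 helper here are assumptions for demonstration, not SPDK's actual implementation:]

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea: poll until a UNIX-domain socket
# appears on disk, with a bounded retry count (hypothetical helper,
# not SPDK's common/autotest_common.sh code).
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i < max_retries )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
        (( ++i ))
    done
    return 1
}

# Demo: a background job binds a UNIX socket shortly after we start waiting.
sock=$(mktemp -u)
( sleep 0.3
  python3 -c 'import socket,sys
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])' "$sock" ) &
if wait_for_socket "$sock" 50; then echo "listening"; else echo "timeout"; fi
wait
rm -f "$sock"
```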
00:20:48.325 [2024-11-20 09:06:13.260755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.586 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:48.587 [2024-11-20 09:06:13.971102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.587 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:48.587 Malloc0 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:48.587 [2024-11-20 09:06:14.025484] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=730240 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=730241 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=730242 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 730240 00:20:48.587 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:48.848 [2024-11-20 09:06:14.116231] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
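[editor's note: the `rpc_cmd` calls traced above build the target in four steps — transport, subsystem, malloc bdev, namespace, listener. A dry-run sketch of that sequence, using the exact arguments recorded in this log; the `rpc()` wrapper below only echoes, whereas the real test dispatches each call through SPDK's `scripts/rpc.py` against the target's RPC socket:]

```shell
#!/usr/bin/env bash
# Dry-run of the control_msg_list.sh target setup shown in the trace.
# rpc() is a stand-in echo wrapper, not the real RPC client.
rpc() { echo "rpc.py $*"; }

subnqn=nqn.2024-07.io.spdk:cnode0

rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
rpc nvmf_create_subsystem "$subnqn" -a          # -a: allow any host
rpc bdev_malloc_create -b Malloc0 32 512        # 32 MiB bdev, 512 B blocks
rpc nvmf_subsystem_add_ns "$subnqn" Malloc0
rpc nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.2 -s 4420
```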
00:20:48.848 [2024-11-20 09:06:14.116583] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:48.848 [2024-11-20 09:06:14.126324] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:49.791 Initializing NVMe Controllers 00:20:49.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:49.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:49.791 Initialization complete. Launching workers. 00:20:49.791 ======================================================== 00:20:49.791 Latency(us) 00:20:49.791 Device Information : IOPS MiB/s Average min max 00:20:49.791 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2090.00 8.16 478.18 163.87 805.07 00:20:49.791 ======================================================== 00:20:49.791 Total : 2090.00 8.16 478.18 163.87 805.07 00:20:49.791 00:20:49.791 Initializing NVMe Controllers 00:20:49.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:49.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:49.791 Initialization complete. Launching workers. 
00:20:49.791 ======================================================== 00:20:49.791 Latency(us) 00:20:49.791 Device Information : IOPS MiB/s Average min max 00:20:49.791 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1413.00 5.52 707.54 306.26 945.35 00:20:49.791 ======================================================== 00:20:49.791 Total : 1413.00 5.52 707.54 306.26 945.35 00:20:49.791 00:20:49.791 Initializing NVMe Controllers 00:20:49.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:49.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:49.791 Initialization complete. Launching workers. 00:20:49.791 ======================================================== 00:20:49.791 Latency(us) 00:20:49.791 Device Information : IOPS MiB/s Average min max 00:20:49.791 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40893.95 40766.48 41016.13 00:20:49.791 ======================================================== 00:20:49.791 Total : 25.00 0.10 40893.95 40766.48 41016.13 00:20:49.791 00:20:49.791 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 730241 00:20:49.791 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 730242 00:20:49.791 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:49.791 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:49.791 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:49.791 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:49.791 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.791 09:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:49.791 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.791 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.791 rmmod nvme_tcp 00:20:50.052 rmmod nvme_fabrics 00:20:50.052 rmmod nvme_keyring 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 729958 ']' 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 729958 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 729958 ']' 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 729958 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 729958 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 729958' 00:20:50.052 killing process with pid 729958 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 729958 00:20:50.052 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 729958 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.314 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.228 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:52.228 00:20:52.228 real 0m12.431s 00:20:52.228 user 0m7.883s 00:20:52.228 
sys 0m6.661s 00:20:52.228 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.228 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.228 ************************************ 00:20:52.228 END TEST nvmf_control_msg_list 00:20:52.228 ************************************ 00:20:52.228 09:06:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:52.228 09:06:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:52.228 09:06:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.228 09:06:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:52.490 ************************************ 00:20:52.490 START TEST nvmf_wait_for_buf 00:20:52.490 ************************************ 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:52.490 * Looking for test storage... 
00:20:52.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:20:52.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.490 --rc genhtml_branch_coverage=1 00:20:52.490 --rc genhtml_function_coverage=1 00:20:52.490 --rc genhtml_legend=1 00:20:52.490 --rc geninfo_all_blocks=1 00:20:52.490 --rc geninfo_unexecuted_blocks=1 00:20:52.490 00:20:52.490 ' 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:52.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.490 --rc genhtml_branch_coverage=1 00:20:52.490 --rc genhtml_function_coverage=1 00:20:52.490 --rc genhtml_legend=1 00:20:52.490 --rc geninfo_all_blocks=1 00:20:52.490 --rc geninfo_unexecuted_blocks=1 00:20:52.490 00:20:52.490 ' 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:52.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.490 --rc genhtml_branch_coverage=1 00:20:52.490 --rc genhtml_function_coverage=1 00:20:52.490 --rc genhtml_legend=1 00:20:52.490 --rc geninfo_all_blocks=1 00:20:52.490 --rc geninfo_unexecuted_blocks=1 00:20:52.490 00:20:52.490 ' 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:52.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.490 --rc genhtml_branch_coverage=1 00:20:52.490 --rc genhtml_function_coverage=1 00:20:52.490 --rc genhtml_legend=1 00:20:52.490 --rc geninfo_all_blocks=1 00:20:52.490 --rc geninfo_unexecuted_blocks=1 00:20:52.490 00:20:52.490 ' 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.490 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.490 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.490 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.490 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.490 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.490 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:52.490 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.490 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:52.490 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:52.490 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.491 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.491 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.491 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.491 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.491 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.491 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:52.491 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.491 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:52.491 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:52.491 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:52.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:52.751 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:52.752 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:00.990 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:00.990 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:00.990 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.990 09:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:00.990 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.990 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.991 09:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.991 09:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:21:00.991 00:21:00.991 --- 10.0.0.2 ping statistics --- 00:21:00.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.991 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:21:00.991 00:21:00.991 --- 10.0.0.1 ping statistics --- 00:21:00.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.991 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=734608 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 734608 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 734608 ']' 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.991 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.991 [2024-11-20 09:06:25.611245] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:21:00.991 [2024-11-20 09:06:25.611310] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.991 [2024-11-20 09:06:25.712261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.991 [2024-11-20 09:06:25.764474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.991 [2024-11-20 09:06:25.764529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:00.991 [2024-11-20 09:06:25.764538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.991 [2024-11-20 09:06:25.764546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.991 [2024-11-20 09:06:25.764552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.991 [2024-11-20 09:06:25.765342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.991 
09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.991 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:01.254 Malloc0 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:01.254 [2024-11-20 09:06:26.610066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:01.254 [2024-11-20 09:06:26.646382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:01.254 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:01.254 [2024-11-20 09:06:26.760280] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:03.167 Initializing NVMe Controllers 00:21:03.167 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:03.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:03.167 Initialization complete. Launching workers. 00:21:03.167 ======================================================== 00:21:03.167 Latency(us) 00:21:03.167 Device Information : IOPS MiB/s Average min max 00:21:03.167 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.99 8020.19 63861.24 00:21:03.167 ======================================================== 00:21:03.167 Total : 129.00 16.12 32294.99 8020.19 63861.24 00:21:03.167 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.167 09:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.167 rmmod nvme_tcp 00:21:03.167 rmmod nvme_fabrics 00:21:03.167 rmmod nvme_keyring 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 734608 ']' 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 734608 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 734608 ']' 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 734608 
00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 734608 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 734608' 00:21:03.167 killing process with pid 734608 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 734608 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 734608 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.167 09:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.167 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.168 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.168 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.712 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.712 00:21:05.712 real 0m12.935s 00:21:05.712 user 0m5.228s 00:21:05.712 sys 0m6.317s 00:21:05.712 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.712 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:05.712 ************************************ 00:21:05.712 END TEST nvmf_wait_for_buf 00:21:05.712 ************************************ 00:21:05.712 09:06:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:05.712 09:06:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:05.712 09:06:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:05.712 09:06:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:05.712 09:06:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.712 09:06:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.847 
09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:13.847 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.847 09:06:37 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:13.847 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.847 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:13.848 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:13.848 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.848 09:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:13.848 ************************************ 00:21:13.848 START TEST nvmf_perf_adq 00:21:13.848 ************************************ 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:13.848 * Looking for test storage... 00:21:13.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:13.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.848 --rc genhtml_branch_coverage=1 00:21:13.848 --rc genhtml_function_coverage=1 00:21:13.848 --rc genhtml_legend=1 00:21:13.848 --rc geninfo_all_blocks=1 00:21:13.848 --rc geninfo_unexecuted_blocks=1 00:21:13.848 00:21:13.848 ' 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:13.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.848 --rc genhtml_branch_coverage=1 00:21:13.848 --rc genhtml_function_coverage=1 00:21:13.848 --rc genhtml_legend=1 00:21:13.848 --rc geninfo_all_blocks=1 00:21:13.848 --rc geninfo_unexecuted_blocks=1 00:21:13.848 00:21:13.848 ' 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:13.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.848 --rc genhtml_branch_coverage=1 00:21:13.848 --rc genhtml_function_coverage=1 00:21:13.848 --rc genhtml_legend=1 00:21:13.848 --rc geninfo_all_blocks=1 00:21:13.848 --rc geninfo_unexecuted_blocks=1 00:21:13.848 00:21:13.848 ' 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:13.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.848 --rc genhtml_branch_coverage=1 00:21:13.848 --rc genhtml_function_coverage=1 00:21:13.848 --rc genhtml_legend=1 00:21:13.848 --rc geninfo_all_blocks=1 00:21:13.848 --rc geninfo_unexecuted_blocks=1 00:21:13.848 00:21:13.848 ' 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.848 09:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:13.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:13.848 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:20.428 09:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:20.428 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:20.429 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:20.429 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:20.429 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:20.429 09:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:20.429 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:20.429 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:21.812 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:23.724 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:29.016 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:29.017 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:29.017 09:06:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:29.017 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:29.017 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:29.017 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:29.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:29.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:21:29.017 00:21:29.017 --- 10.0.0.2 ping statistics --- 00:21:29.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.017 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:29.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:21:29.017 00:21:29.017 --- 10.0.0.1 ping statistics --- 00:21:29.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.017 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:29.017 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=744853 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 744853 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 744853 ']' 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.279 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.279 [2024-11-20 09:06:54.620912] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:21:29.279 [2024-11-20 09:06:54.620975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.279 [2024-11-20 09:06:54.709443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.279 [2024-11-20 09:06:54.764796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.279 [2024-11-20 09:06:54.764849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.279 [2024-11-20 09:06:54.764859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.279 [2024-11-20 09:06:54.764867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.279 [2024-11-20 09:06:54.764874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:29.279 [2024-11-20 09:06:54.768193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.279 [2024-11-20 09:06:54.768531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.279 [2024-11-20 09:06:54.768692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.279 [2024-11-20 09:06:54.768695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:30.224 09:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.224 [2024-11-20 09:06:55.636873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.224 Malloc1 00:21:30.224 09:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.224 [2024-11-20 09:06:55.719675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=745185 00:21:30.224 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:30.224 09:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:32.772 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:32.772 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.772 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.772 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.772 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:32.772 "tick_rate": 2400000000, 00:21:32.772 "poll_groups": [ 00:21:32.772 { 00:21:32.772 "name": "nvmf_tgt_poll_group_000", 00:21:32.772 "admin_qpairs": 1, 00:21:32.772 "io_qpairs": 1, 00:21:32.772 "current_admin_qpairs": 1, 00:21:32.772 "current_io_qpairs": 1, 00:21:32.772 "pending_bdev_io": 0, 00:21:32.772 "completed_nvme_io": 16582, 00:21:32.772 "transports": [ 00:21:32.772 { 00:21:32.772 "trtype": "TCP" 00:21:32.772 } 00:21:32.772 ] 00:21:32.772 }, 00:21:32.772 { 00:21:32.772 "name": "nvmf_tgt_poll_group_001", 00:21:32.772 "admin_qpairs": 0, 00:21:32.772 "io_qpairs": 1, 00:21:32.772 "current_admin_qpairs": 0, 00:21:32.772 "current_io_qpairs": 1, 00:21:32.772 "pending_bdev_io": 0, 00:21:32.772 "completed_nvme_io": 16437, 00:21:32.772 "transports": [ 00:21:32.772 { 00:21:32.772 "trtype": "TCP" 00:21:32.772 } 00:21:32.772 ] 00:21:32.772 }, 00:21:32.772 { 00:21:32.772 "name": "nvmf_tgt_poll_group_002", 00:21:32.772 "admin_qpairs": 0, 00:21:32.772 "io_qpairs": 1, 00:21:32.772 "current_admin_qpairs": 0, 00:21:32.772 "current_io_qpairs": 1, 00:21:32.772 "pending_bdev_io": 0, 00:21:32.772 "completed_nvme_io": 18903, 00:21:32.772 
"transports": [ 00:21:32.772 { 00:21:32.772 "trtype": "TCP" 00:21:32.772 } 00:21:32.772 ] 00:21:32.772 }, 00:21:32.772 { 00:21:32.772 "name": "nvmf_tgt_poll_group_003", 00:21:32.772 "admin_qpairs": 0, 00:21:32.772 "io_qpairs": 1, 00:21:32.772 "current_admin_qpairs": 0, 00:21:32.772 "current_io_qpairs": 1, 00:21:32.772 "pending_bdev_io": 0, 00:21:32.772 "completed_nvme_io": 16283, 00:21:32.772 "transports": [ 00:21:32.772 { 00:21:32.772 "trtype": "TCP" 00:21:32.772 } 00:21:32.772 ] 00:21:32.772 } 00:21:32.772 ] 00:21:32.772 }' 00:21:32.772 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:32.772 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:32.772 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:32.772 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:32.772 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 745185 00:21:40.915 Initializing NVMe Controllers 00:21:40.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:40.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:40.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:40.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:40.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:40.915 Initialization complete. Launching workers. 
00:21:40.915 ======================================================== 00:21:40.915 Latency(us) 00:21:40.915 Device Information : IOPS MiB/s Average min max 00:21:40.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13896.59 54.28 4605.56 1284.50 12199.18 00:21:40.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12866.51 50.26 4973.40 1263.05 13190.81 00:21:40.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12729.91 49.73 5027.92 1326.50 12178.77 00:21:40.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12852.11 50.20 4980.17 1347.60 12903.77 00:21:40.915 ======================================================== 00:21:40.915 Total : 52345.12 204.47 4890.67 1263.05 13190.81 00:21:40.915 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:40.915 rmmod nvme_tcp 00:21:40.915 rmmod nvme_fabrics 00:21:40.915 rmmod nvme_keyring 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:40.915 09:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 744853 ']' 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 744853 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 744853 ']' 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 744853 00:21:40.915 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:40.915 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.915 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 744853 00:21:40.915 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:40.915 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:40.915 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 744853' 00:21:40.915 killing process with pid 744853 00:21:40.915 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 744853 00:21:40.915 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 744853 00:21:40.915 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:40.916 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:40.916 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:40.916 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:40.916 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:40.916 09:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:40.916 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:40.916 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:40.916 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:40.916 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.916 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.916 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.829 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:42.829 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:42.829 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:42.829 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:44.744 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:46.658 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:51.952 09:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.952 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:51.953 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:51.953 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:51.953 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:51.953 09:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:51.953 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.953 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:51.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:21:51.953 00:21:51.953 --- 10.0.0.2 ping statistics --- 00:21:51.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.953 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:51.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:21:51.953 00:21:51.953 --- 10.0.0.1 ping statistics --- 00:21:51.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.953 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:51.953 net.core.busy_poll = 1 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:51.953 net.core.busy_read = 1 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.953 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=749649 00:21:51.954 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 749649 00:21:51.954 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:51.954 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 749649 ']' 00:21:51.954 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.954 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.954 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.954 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.954 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.215 [2024-11-20 09:07:17.500556] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:21:52.215 [2024-11-20 09:07:17.500626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.215 [2024-11-20 09:07:17.602498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.215 [2024-11-20 09:07:17.656774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.215 [2024-11-20 09:07:17.656829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.215 [2024-11-20 09:07:17.656838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.215 [2024-11-20 09:07:17.656845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:52.215 [2024-11-20 09:07:17.656852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.215 [2024-11-20 09:07:17.659235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.215 [2024-11-20 09:07:17.659445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.215 [2024-11-20 09:07:17.659578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.215 [2024-11-20 09:07:17.659579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 [2024-11-20 09:07:18.516036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.159 09:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 Malloc1 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.159 [2024-11-20 09:07:18.591315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=750002 
00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:53.159 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:55.706 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:55.707 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.707 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.707 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.707 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:55.707 "tick_rate": 2400000000, 00:21:55.707 "poll_groups": [ 00:21:55.707 { 00:21:55.707 "name": "nvmf_tgt_poll_group_000", 00:21:55.707 "admin_qpairs": 1, 00:21:55.707 "io_qpairs": 4, 00:21:55.707 "current_admin_qpairs": 1, 00:21:55.707 "current_io_qpairs": 4, 00:21:55.707 "pending_bdev_io": 0, 00:21:55.707 "completed_nvme_io": 36031, 00:21:55.707 "transports": [ 00:21:55.707 { 00:21:55.707 "trtype": "TCP" 00:21:55.707 } 00:21:55.707 ] 00:21:55.707 }, 00:21:55.707 { 00:21:55.707 "name": "nvmf_tgt_poll_group_001", 00:21:55.707 "admin_qpairs": 0, 00:21:55.707 "io_qpairs": 0, 00:21:55.707 "current_admin_qpairs": 0, 00:21:55.707 "current_io_qpairs": 0, 00:21:55.707 "pending_bdev_io": 0, 00:21:55.707 "completed_nvme_io": 0, 00:21:55.707 "transports": [ 00:21:55.707 { 00:21:55.707 "trtype": "TCP" 00:21:55.707 } 00:21:55.707 ] 00:21:55.707 }, 00:21:55.707 { 00:21:55.707 "name": "nvmf_tgt_poll_group_002", 00:21:55.707 "admin_qpairs": 0, 00:21:55.707 "io_qpairs": 0, 00:21:55.707 "current_admin_qpairs": 0, 00:21:55.707 
"current_io_qpairs": 0, 00:21:55.707 "pending_bdev_io": 0, 00:21:55.707 "completed_nvme_io": 0, 00:21:55.707 "transports": [ 00:21:55.707 { 00:21:55.707 "trtype": "TCP" 00:21:55.707 } 00:21:55.707 ] 00:21:55.707 }, 00:21:55.707 { 00:21:55.707 "name": "nvmf_tgt_poll_group_003", 00:21:55.707 "admin_qpairs": 0, 00:21:55.707 "io_qpairs": 0, 00:21:55.707 "current_admin_qpairs": 0, 00:21:55.707 "current_io_qpairs": 0, 00:21:55.707 "pending_bdev_io": 0, 00:21:55.707 "completed_nvme_io": 0, 00:21:55.707 "transports": [ 00:21:55.707 { 00:21:55.707 "trtype": "TCP" 00:21:55.707 } 00:21:55.707 ] 00:21:55.707 } 00:21:55.707 ] 00:21:55.707 }' 00:21:55.707 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:55.707 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:55.707 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:21:55.707 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:21:55.707 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 750002 00:22:03.853 Initializing NVMe Controllers 00:22:03.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:03.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:03.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:03.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:03.853 Initialization complete. Launching workers. 
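The check at `perf_adq.sh@108` above filters the `nvmf_get_stats` JSON for poll groups with `current_io_qpairs == 0`, counts them with `wc -l`, and fails if fewer than a threshold are idle (here `count=3`, compared against `2`): with ADQ steering all I/O onto one poll group, the remaining groups should stay idle. A minimal sketch of that same check in Python, using an abbreviated copy of the stats printed in this log (field subset and threshold are taken from the log; this is an illustration, not part of the test scripts):

```python
import json

# Abbreviated nvmf_get_stats output as logged above: poll group 000 carries
# all 4 I/O qpairs, the other three groups are idle.
stats = json.loads("""{
  "tick_rate": 2400000000,
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 4},
    {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 0},
    {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0},
    {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0}
  ]
}""")

# Equivalent of: jq '.poll_groups[] | select(.current_io_qpairs == 0)' | wc -l
idle = [g for g in stats["poll_groups"] if g["current_io_qpairs"] == 0]
count = len(idle)

# Equivalent of: [[ $count -lt 2 ]] && fail -- ADQ should leave most groups idle
assert not count < 2, "ADQ steering failed: too few idle poll groups"
print(count)
```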
00:22:03.853 ======================================================== 00:22:03.853 Latency(us) 00:22:03.853 Device Information : IOPS MiB/s Average min max 00:22:03.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8888.60 34.72 7214.14 1047.32 60567.56 00:22:03.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5591.10 21.84 11447.32 1846.09 55293.64 00:22:03.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5634.30 22.01 11360.16 1375.65 56055.42 00:22:03.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5305.80 20.73 12061.99 1393.11 60403.02 00:22:03.853 ======================================================== 00:22:03.853 Total : 25419.79 99.30 10076.07 1047.32 60567.56 00:22:03.853 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:03.853 rmmod nvme_tcp 00:22:03.853 rmmod nvme_fabrics 00:22:03.853 rmmod nvme_keyring 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:03.853 09:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 749649 ']' 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 749649 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 749649 ']' 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 749649 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 749649 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 749649' 00:22:03.853 killing process with pid 749649 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 749649 00:22:03.853 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 749649 00:22:03.853 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:03.853 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:03.853 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:03.853 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:03.853 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:03.853 09:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:03.853 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:03.853 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:03.853 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:03.853 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.853 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.853 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:07.162 00:22:07.162 real 0m54.125s 00:22:07.162 user 2m50.173s 00:22:07.162 sys 0m11.248s 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.162 ************************************ 00:22:07.162 END TEST nvmf_perf_adq 00:22:07.162 ************************************ 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:07.162 ************************************ 00:22:07.162 START TEST nvmf_shutdown 00:22:07.162 ************************************ 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:07.162 * Looking for test storage... 00:22:07.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.162 09:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:07.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.162 --rc genhtml_branch_coverage=1 00:22:07.162 --rc genhtml_function_coverage=1 00:22:07.162 --rc genhtml_legend=1 00:22:07.162 --rc geninfo_all_blocks=1 00:22:07.162 --rc geninfo_unexecuted_blocks=1 00:22:07.162 00:22:07.162 ' 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:07.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.162 --rc genhtml_branch_coverage=1 00:22:07.162 --rc genhtml_function_coverage=1 00:22:07.162 --rc genhtml_legend=1 00:22:07.162 --rc geninfo_all_blocks=1 00:22:07.162 --rc geninfo_unexecuted_blocks=1 00:22:07.162 00:22:07.162 ' 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:07.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.162 --rc genhtml_branch_coverage=1 00:22:07.162 --rc genhtml_function_coverage=1 00:22:07.162 --rc genhtml_legend=1 00:22:07.162 --rc geninfo_all_blocks=1 00:22:07.162 --rc geninfo_unexecuted_blocks=1 00:22:07.162 00:22:07.162 ' 00:22:07.162 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:07.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.163 --rc genhtml_branch_coverage=1 00:22:07.163 --rc genhtml_function_coverage=1 00:22:07.163 --rc genhtml_legend=1 00:22:07.163 --rc geninfo_all_blocks=1 00:22:07.163 --rc geninfo_unexecuted_blocks=1 00:22:07.163 00:22:07.163 ' 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:07.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:07.163 ************************************ 00:22:07.163 START TEST nvmf_shutdown_tc1 00:22:07.163 ************************************ 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:07.163 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:15.307 09:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.307 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.308 09:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:15.308 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.308 09:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:15.308 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:15.308 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:15.308 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.308 09:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.308 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:22:15.309 00:22:15.309 --- 10.0.0.2 ping statistics --- 00:22:15.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.309 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:22:15.309 00:22:15.309 --- 10.0.0.1 ping statistics --- 00:22:15.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.309 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=756471 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 756471 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 756471 ']' 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:15.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.309 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.309 [2024-11-20 09:07:40.039269] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:22:15.309 [2024-11-20 09:07:40.039343] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.309 [2024-11-20 09:07:40.142339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.309 [2024-11-20 09:07:40.194643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.309 [2024-11-20 09:07:40.194699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.309 [2024-11-20 09:07:40.194708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.309 [2024-11-20 09:07:40.194716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.309 [2024-11-20 09:07:40.194722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
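The `nvmf_tcp_init` sequence traced above (flush both ports, create a namespace, move the target port into it, assign 10.0.0.1/10.0.0.2, bring links up, open TCP 4420 in iptables, cross-ping) can be sketched as a dry-run script. Interface names, addresses, and the namespace name mirror the log; the `dry_run` wrapper is added here so the sketch prints each command instead of requiring root and real hardware.

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_tcp_init sequence from the trace above.
# cvl_0_0/cvl_0_1, 10.0.0.1/.2 and cvl_0_0_ns_spdk mirror the log;
# dry_run only prints each command so this can run unprivileged.
dry_run() { echo "+ $*"; }

setup_tcp_ns() {
    target_if=$1; initiator_if=$2; ns=$3
    dry_run ip -4 addr flush "$target_if"
    dry_run ip -4 addr flush "$initiator_if"
    dry_run ip netns add "$ns"
    dry_run ip link set "$target_if" netns "$ns"
    dry_run ip addr add 10.0.0.1/24 dev "$initiator_if"
    dry_run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    dry_run ip link set "$initiator_if" up
    dry_run ip netns exec "$ns" ip link set "$target_if" up
    dry_run ip netns exec "$ns" ip link set lo up
    dry_run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    dry_run ping -c 1 10.0.0.2
    dry_run ip netns exec "$ns" ping -c 1 10.0.0.1
}

setup_tcp_ns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Dropping the `dry_run` wrapper (and running as root) would apply the same plumbing the harness performs before launching `nvmf_tgt` inside the namespace.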
00:22:15.309 [2024-11-20 09:07:40.196737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.309 [2024-11-20 09:07:40.196904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.309 [2024-11-20 09:07:40.197066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.309 [2024-11-20 09:07:40.197067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.579 [2024-11-20 09:07:40.926131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.579 09:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.579 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.579 Malloc1 00:22:15.579 [2024-11-20 09:07:41.060338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.579 Malloc2 00:22:15.887 Malloc3 00:22:15.887 Malloc4 00:22:15.887 Malloc5 00:22:15.887 Malloc6 00:22:15.887 Malloc7 00:22:15.887 Malloc8 00:22:16.238 Malloc9 
00:22:16.238 Malloc10 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=756854 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 756854 /var/tmp/bdevperf.sock 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 756854 ']' 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
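The `waitforlisten` calls above block until the freshly launched app answers on its UNIX-domain RPC socket (`/var/tmp/spdk.sock`, `/var/tmp/bdevperf.sock`). A minimal stand-in, assuming socket presence is a good-enough readiness signal (the real helper additionally retries an actual RPC), with the same `max_retries=100` default visible in the log:

```shell
#!/bin/sh
# Minimal stand-in for the waitforlisten pattern in the trace: poll for
# the app's UNIX-domain RPC socket. The real helper also issues an RPC
# to confirm the app answers; this sketch only checks the socket file,
# with the same default of 100 retries seen in the log.
wait_for_socket() {
    sock=$1
    max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        if [ -S "$sock" ]; then
            return 0          # socket exists; app is (probably) listening
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1                  # gave up
}
```

Typical use, matching the flow above: `wait_for_socket /var/tmp/bdevperf.sock || exit 1` before sending `rpc_cmd -s /var/tmp/bdevperf.sock`.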
00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.238 { 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme$subsystem", 00:22:16.238 "trtype": "$TEST_TRANSPORT", 00:22:16.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "$NVMF_PORT", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.238 "hdgst": ${hdgst:-false}, 00:22:16.238 "ddgst": ${ddgst:-false} 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 } 00:22:16.238 EOF 00:22:16.238 )") 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.238 09:07:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.238 { 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme$subsystem", 00:22:16.238 "trtype": "$TEST_TRANSPORT", 00:22:16.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "$NVMF_PORT", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.238 "hdgst": ${hdgst:-false}, 00:22:16.238 "ddgst": ${ddgst:-false} 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 } 00:22:16.238 EOF 00:22:16.238 )") 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.238 { 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme$subsystem", 00:22:16.238 "trtype": "$TEST_TRANSPORT", 00:22:16.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "$NVMF_PORT", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.238 "hdgst": ${hdgst:-false}, 00:22:16.238 "ddgst": ${ddgst:-false} 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 } 00:22:16.238 EOF 00:22:16.238 )") 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.238 { 
00:22:16.238 "params": { 00:22:16.238 "name": "Nvme$subsystem", 00:22:16.238 "trtype": "$TEST_TRANSPORT", 00:22:16.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "$NVMF_PORT", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.238 "hdgst": ${hdgst:-false}, 00:22:16.238 "ddgst": ${ddgst:-false} 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 } 00:22:16.238 EOF 00:22:16.238 )") 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.238 { 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme$subsystem", 00:22:16.238 "trtype": "$TEST_TRANSPORT", 00:22:16.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "$NVMF_PORT", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.238 "hdgst": ${hdgst:-false}, 00:22:16.238 "ddgst": ${ddgst:-false} 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 } 00:22:16.238 EOF 00:22:16.238 )") 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.238 { 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme$subsystem", 00:22:16.238 "trtype": "$TEST_TRANSPORT", 00:22:16.238 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "$NVMF_PORT", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.238 "hdgst": ${hdgst:-false}, 00:22:16.238 "ddgst": ${ddgst:-false} 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 } 00:22:16.238 EOF 00:22:16.238 )") 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:16.238 [2024-11-20 09:07:41.574091] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:22:16.238 [2024-11-20 09:07:41.574173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.238 { 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme$subsystem", 00:22:16.238 "trtype": "$TEST_TRANSPORT", 00:22:16.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "$NVMF_PORT", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.238 "hdgst": ${hdgst:-false}, 00:22:16.238 "ddgst": ${ddgst:-false} 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 } 00:22:16.238 EOF 00:22:16.238 )") 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.238 { 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme$subsystem", 00:22:16.238 "trtype": "$TEST_TRANSPORT", 00:22:16.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "$NVMF_PORT", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.238 "hdgst": ${hdgst:-false}, 00:22:16.238 "ddgst": ${ddgst:-false} 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 } 00:22:16.238 EOF 00:22:16.238 )") 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.238 { 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme$subsystem", 00:22:16.238 "trtype": "$TEST_TRANSPORT", 00:22:16.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "$NVMF_PORT", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.238 "hdgst": ${hdgst:-false}, 00:22:16.238 "ddgst": ${ddgst:-false} 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 } 00:22:16.238 EOF 00:22:16.238 )") 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:16.238 { 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme$subsystem", 00:22:16.238 "trtype": "$TEST_TRANSPORT", 00:22:16.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "$NVMF_PORT", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.238 "hdgst": ${hdgst:-false}, 00:22:16.238 "ddgst": ${ddgst:-false} 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 } 00:22:16.238 EOF 00:22:16.238 )") 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:16.238 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme1", 00:22:16.238 "trtype": "tcp", 00:22:16.238 "traddr": "10.0.0.2", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "4420", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.238 "hdgst": false, 00:22:16.238 "ddgst": false 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 },{ 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme2", 00:22:16.238 "trtype": "tcp", 00:22:16.238 "traddr": "10.0.0.2", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "4420", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:16.238 "hdgst": false, 00:22:16.238 "ddgst": false 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 },{ 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme3", 00:22:16.238 "trtype": "tcp", 00:22:16.238 "traddr": 
"10.0.0.2", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "4420", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:16.238 "hdgst": false, 00:22:16.238 "ddgst": false 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 },{ 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme4", 00:22:16.238 "trtype": "tcp", 00:22:16.238 "traddr": "10.0.0.2", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "4420", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:16.238 "hdgst": false, 00:22:16.238 "ddgst": false 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 },{ 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme5", 00:22:16.238 "trtype": "tcp", 00:22:16.238 "traddr": "10.0.0.2", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "4420", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:16.238 "hdgst": false, 00:22:16.238 "ddgst": false 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 },{ 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme6", 00:22:16.238 "trtype": "tcp", 00:22:16.238 "traddr": "10.0.0.2", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "4420", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:16.238 "hdgst": false, 00:22:16.238 "ddgst": false 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 },{ 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme7", 00:22:16.238 "trtype": "tcp", 00:22:16.238 "traddr": "10.0.0.2", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "4420", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:16.238 "hdgst": false, 00:22:16.238 "ddgst": false 00:22:16.238 }, 00:22:16.238 
"method": "bdev_nvme_attach_controller" 00:22:16.238 },{ 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme8", 00:22:16.238 "trtype": "tcp", 00:22:16.238 "traddr": "10.0.0.2", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "4420", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:16.238 "hdgst": false, 00:22:16.238 "ddgst": false 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 },{ 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme9", 00:22:16.238 "trtype": "tcp", 00:22:16.238 "traddr": "10.0.0.2", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "4420", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:16.238 "hdgst": false, 00:22:16.238 "ddgst": false 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 },{ 00:22:16.238 "params": { 00:22:16.238 "name": "Nvme10", 00:22:16.238 "trtype": "tcp", 00:22:16.238 "traddr": "10.0.0.2", 00:22:16.238 "adrfam": "ipv4", 00:22:16.238 "trsvcid": "4420", 00:22:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:16.238 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:16.238 "hdgst": false, 00:22:16.238 "ddgst": false 00:22:16.238 }, 00:22:16.238 "method": "bdev_nvme_attach_controller" 00:22:16.238 }' 00:22:16.238 [2024-11-20 09:07:41.670713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.238 [2024-11-20 09:07:41.723296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.624 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.624 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:17.624 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 
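The trace above shows nvmf/common.sh building one `bdev_nvme_attach_controller` JSON fragment per subsystem with a here-document, then joining the fragments with `IFS=,` and normalizing with `jq`. A minimal standalone sketch of that pattern (the function name and environment defaults here are assumptions taken from this log, not the exact upstream helper):

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config generation visible in this log:
# one here-doc fragment per subsystem, accumulated into an array and
# joined with a comma. Defaults are taken from the values printed above.
gen_target_json() {
	local subsystem
	local config=()
	for subsystem in "${@:-1}"; do
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done
	# Join the fragments with a comma, as the traced script does before
	# piping the result through jq.
	local IFS=,
	printf '%s\n' "${config[*]}"
}

gen_target_json 1 2
```

In the log this output is fed to bdevperf via `--json /dev/fd/62`, so the ten generated fragments become ten attached NVMe-oF controllers (Nvme1..Nvme10).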
00:22:17.624 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.624 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.624 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.624 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 756854 00:22:17.624 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:17.624 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:18.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 756854 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 756471 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.567 { 00:22:18.567 "params": { 00:22:18.567 "name": "Nvme$subsystem", 00:22:18.567 "trtype": "$TEST_TRANSPORT", 00:22:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.567 "adrfam": "ipv4", 00:22:18.567 "trsvcid": "$NVMF_PORT", 00:22:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.567 "hdgst": ${hdgst:-false}, 00:22:18.567 "ddgst": ${ddgst:-false} 00:22:18.567 }, 00:22:18.567 "method": "bdev_nvme_attach_controller" 00:22:18.567 } 00:22:18.567 EOF 00:22:18.567 )") 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.567 { 00:22:18.567 "params": { 00:22:18.567 "name": "Nvme$subsystem", 00:22:18.567 "trtype": "$TEST_TRANSPORT", 00:22:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.567 "adrfam": "ipv4", 00:22:18.567 "trsvcid": "$NVMF_PORT", 00:22:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.567 "hdgst": ${hdgst:-false}, 00:22:18.567 "ddgst": ${ddgst:-false} 00:22:18.567 }, 00:22:18.567 "method": "bdev_nvme_attach_controller" 00:22:18.567 } 00:22:18.567 EOF 00:22:18.567 )") 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.567 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.567 { 00:22:18.567 "params": { 00:22:18.567 "name": "Nvme$subsystem", 
00:22:18.567 "trtype": "$TEST_TRANSPORT", 00:22:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.567 "adrfam": "ipv4", 00:22:18.567 "trsvcid": "$NVMF_PORT", 00:22:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.567 "hdgst": ${hdgst:-false}, 00:22:18.567 "ddgst": ${ddgst:-false} 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 } 00:22:18.568 EOF 00:22:18.568 )") 00:22:18.568 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:18.568 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.568 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.568 { 00:22:18.568 "params": { 00:22:18.568 "name": "Nvme$subsystem", 00:22:18.568 "trtype": "$TEST_TRANSPORT", 00:22:18.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.568 "adrfam": "ipv4", 00:22:18.568 "trsvcid": "$NVMF_PORT", 00:22:18.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.568 "hdgst": ${hdgst:-false}, 00:22:18.568 "ddgst": ${ddgst:-false} 00:22:18.568 }, 00:22:18.568 "method": "bdev_nvme_attach_controller" 00:22:18.568 } 00:22:18.568 EOF 00:22:18.568 )") 00:22:18.568 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:18.829 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.829 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.829 { 00:22:18.829 "params": { 00:22:18.829 "name": "Nvme$subsystem", 00:22:18.829 "trtype": "$TEST_TRANSPORT", 00:22:18.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.829 "adrfam": "ipv4", 
00:22:18.829 "trsvcid": "$NVMF_PORT", 00:22:18.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.829 "hdgst": ${hdgst:-false}, 00:22:18.829 "ddgst": ${ddgst:-false} 00:22:18.829 }, 00:22:18.829 "method": "bdev_nvme_attach_controller" 00:22:18.829 } 00:22:18.829 EOF 00:22:18.829 )") 00:22:18.829 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:18.829 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.829 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.829 { 00:22:18.829 "params": { 00:22:18.829 "name": "Nvme$subsystem", 00:22:18.829 "trtype": "$TEST_TRANSPORT", 00:22:18.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.829 "adrfam": "ipv4", 00:22:18.829 "trsvcid": "$NVMF_PORT", 00:22:18.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.829 "hdgst": ${hdgst:-false}, 00:22:18.829 "ddgst": ${ddgst:-false} 00:22:18.829 }, 00:22:18.829 "method": "bdev_nvme_attach_controller" 00:22:18.829 } 00:22:18.830 EOF 00:22:18.830 )") 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:18.830 [2024-11-20 09:07:44.110790] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:22:18.830 [2024-11-20 09:07:44.110844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757236 ] 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.830 { 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme$subsystem", 00:22:18.830 "trtype": "$TEST_TRANSPORT", 00:22:18.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "$NVMF_PORT", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.830 "hdgst": ${hdgst:-false}, 00:22:18.830 "ddgst": ${ddgst:-false} 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 } 00:22:18.830 EOF 00:22:18.830 )") 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.830 { 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme$subsystem", 00:22:18.830 "trtype": "$TEST_TRANSPORT", 00:22:18.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "$NVMF_PORT", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.830 "hdgst": ${hdgst:-false}, 00:22:18.830 "ddgst": ${ddgst:-false} 00:22:18.830 }, 00:22:18.830 "method": 
"bdev_nvme_attach_controller" 00:22:18.830 } 00:22:18.830 EOF 00:22:18.830 )") 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.830 { 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme$subsystem", 00:22:18.830 "trtype": "$TEST_TRANSPORT", 00:22:18.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "$NVMF_PORT", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.830 "hdgst": ${hdgst:-false}, 00:22:18.830 "ddgst": ${ddgst:-false} 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 } 00:22:18.830 EOF 00:22:18.830 )") 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.830 { 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme$subsystem", 00:22:18.830 "trtype": "$TEST_TRANSPORT", 00:22:18.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "$NVMF_PORT", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.830 "hdgst": ${hdgst:-false}, 00:22:18.830 "ddgst": ${ddgst:-false} 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 } 00:22:18.830 EOF 00:22:18.830 )") 00:22:18.830 09:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:18.830 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme1", 00:22:18.830 "trtype": "tcp", 00:22:18.830 "traddr": "10.0.0.2", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "4420", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:18.830 "hdgst": false, 00:22:18.830 "ddgst": false 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 },{ 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme2", 00:22:18.830 "trtype": "tcp", 00:22:18.830 "traddr": "10.0.0.2", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "4420", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:18.830 "hdgst": false, 00:22:18.830 "ddgst": false 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 },{ 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme3", 00:22:18.830 "trtype": "tcp", 00:22:18.830 "traddr": "10.0.0.2", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "4420", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:18.830 "hdgst": false, 00:22:18.830 "ddgst": false 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 },{ 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme4", 00:22:18.830 "trtype": "tcp", 00:22:18.830 "traddr": "10.0.0.2", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "4420", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:18.830 "hostnqn": 
"nqn.2016-06.io.spdk:host4", 00:22:18.830 "hdgst": false, 00:22:18.830 "ddgst": false 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 },{ 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme5", 00:22:18.830 "trtype": "tcp", 00:22:18.830 "traddr": "10.0.0.2", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "4420", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:18.830 "hdgst": false, 00:22:18.830 "ddgst": false 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 },{ 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme6", 00:22:18.830 "trtype": "tcp", 00:22:18.830 "traddr": "10.0.0.2", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "4420", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:18.830 "hdgst": false, 00:22:18.830 "ddgst": false 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 },{ 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme7", 00:22:18.830 "trtype": "tcp", 00:22:18.830 "traddr": "10.0.0.2", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "4420", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:18.830 "hdgst": false, 00:22:18.830 "ddgst": false 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 },{ 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme8", 00:22:18.830 "trtype": "tcp", 00:22:18.830 "traddr": "10.0.0.2", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "4420", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:18.830 "hdgst": false, 00:22:18.830 "ddgst": false 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 },{ 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme9", 00:22:18.830 "trtype": "tcp", 00:22:18.830 
"traddr": "10.0.0.2", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "4420", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:18.830 "hdgst": false, 00:22:18.830 "ddgst": false 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.830 },{ 00:22:18.830 "params": { 00:22:18.830 "name": "Nvme10", 00:22:18.830 "trtype": "tcp", 00:22:18.830 "traddr": "10.0.0.2", 00:22:18.830 "adrfam": "ipv4", 00:22:18.830 "trsvcid": "4420", 00:22:18.830 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:18.830 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:18.830 "hdgst": false, 00:22:18.830 "ddgst": false 00:22:18.830 }, 00:22:18.830 "method": "bdev_nvme_attach_controller" 00:22:18.831 }' 00:22:18.831 [2024-11-20 09:07:44.199399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.831 [2024-11-20 09:07:44.235126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.216 Running I/O for 1 seconds... 
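The aggregate throughput line that follows (1800.00 IOPS, 112.50 MiB/s) is consistent with IOPS times the 64 KiB I/O size passed to bdevperf (`-o 65536`). A quick arithmetic check, using only numbers taken from this log:

```shell
# MiB/s = IOPS * io_size_bytes / 2^20.
# 1800 IOPS and the 65536-byte I/O size are both read off this log.
awk 'BEGIN { printf "%.2f MiB/s\n", 1800 * 65536 / 1048576 }'
# -> 112.50 MiB/s
```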
00:22:21.421 1800.00 IOPS, 112.50 MiB/s 00:22:21.421 Latency(us) 00:22:21.421 [2024-11-20T08:07:46.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.421 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.421 Verification LBA range: start 0x0 length 0x400 00:22:21.421 Nvme1n1 : 1.11 230.06 14.38 0.00 0.00 274782.51 20097.71 249910.61 00:22:21.421 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.421 Verification LBA range: start 0x0 length 0x400 00:22:21.421 Nvme2n1 : 1.07 238.35 14.90 0.00 0.00 260850.77 41069.23 219327.15 00:22:21.421 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.421 Verification LBA range: start 0x0 length 0x400 00:22:21.421 Nvme3n1 : 1.08 237.88 14.87 0.00 0.00 256538.67 15073.28 228939.09 00:22:21.421 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.421 Verification LBA range: start 0x0 length 0x400 00:22:21.421 Nvme4n1 : 1.09 239.98 15.00 0.00 0.00 247593.81 3345.07 246415.36 00:22:21.421 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.421 Verification LBA range: start 0x0 length 0x400 00:22:21.421 Nvme5n1 : 1.08 236.98 14.81 0.00 0.00 247928.75 13161.81 255153.49 00:22:21.421 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.421 Verification LBA range: start 0x0 length 0x400 00:22:21.421 Nvme6n1 : 1.13 227.35 14.21 0.00 0.00 254484.69 20206.93 253405.87 00:22:21.421 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.421 Verification LBA range: start 0x0 length 0x400 00:22:21.421 Nvme7n1 : 1.12 230.89 14.43 0.00 0.00 245152.49 1221.97 228939.09 00:22:21.421 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.421 Verification LBA range: start 0x0 length 0x400 00:22:21.421 Nvme8n1 : 1.18 270.05 16.88 0.00 0.00 207463.25 15837.87 232434.35 
00:22:21.421 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.421 Verification LBA range: start 0x0 length 0x400 00:22:21.421 Nvme9n1 : 1.19 272.22 17.01 0.00 0.00 202210.68 1631.57 274377.39 00:22:21.421 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.421 Verification LBA range: start 0x0 length 0x400 00:22:21.421 Nvme10n1 : 1.20 266.99 16.69 0.00 0.00 202517.25 10922.67 267386.88 00:22:21.421 [2024-11-20T08:07:46.950Z] =================================================================================================================== 00:22:21.421 [2024-11-20T08:07:46.950Z] Total : 2450.76 153.17 0.00 0.00 237411.42 1221.97 274377.39 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:21.421 09:07:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.421 rmmod nvme_tcp 00:22:21.421 rmmod nvme_fabrics 00:22:21.421 rmmod nvme_keyring 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 756471 ']' 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 756471 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 756471 ']' 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 756471 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.421 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 756471 00:22:21.683 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:21.683 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:21.683 09:07:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 756471' 00:22:21.683 killing process with pid 756471 00:22:21.683 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 756471 00:22:21.683 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 756471 00:22:21.683 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:21.683 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:21.683 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:21.683 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:21.683 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:21.683 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:21.683 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:21.943 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.943 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.943 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.943 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.943 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.857 00:22:23.857 real 0m16.800s 00:22:23.857 user 0m34.089s 00:22:23.857 sys 0m6.872s 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.857 ************************************ 00:22:23.857 END TEST nvmf_shutdown_tc1 00:22:23.857 ************************************ 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:23.857 ************************************ 00:22:23.857 START TEST nvmf_shutdown_tc2 00:22:23.857 ************************************ 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.857 09:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.857 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:24.120 09:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.120 09:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.120 09:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:24.120 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:24.120 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:24.120 09:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:24.120 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.120 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.121 09:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:24.121 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:24.121 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:24.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:22:24.383 00:22:24.383 --- 10.0.0.2 ping statistics --- 00:22:24.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.383 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:24.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:22:24.383 00:22:24.383 --- 10.0.0.1 ping statistics --- 00:22:24.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.383 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.383 
09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=758489 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 758489 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 758489 ']' 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.383 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.383 [2024-11-20 09:07:49.815542] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:22:24.383 [2024-11-20 09:07:49.815608] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.644 [2024-11-20 09:07:49.911592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.644 [2024-11-20 09:07:49.953303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.644 [2024-11-20 09:07:49.953342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.644 [2024-11-20 09:07:49.953348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.644 [2024-11-20 09:07:49.953353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.644 [2024-11-20 09:07:49.953358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:24.644 [2024-11-20 09:07:49.955130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.644 [2024-11-20 09:07:49.955293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.644 [2024-11-20 09:07:49.955599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:24.644 [2024-11-20 09:07:49.955600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.215 [2024-11-20 09:07:50.666774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.215 09:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.215 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.476 Malloc1 00:22:25.476 [2024-11-20 09:07:50.778100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.476 Malloc2 00:22:25.476 Malloc3 00:22:25.476 Malloc4 00:22:25.476 Malloc5 00:22:25.476 Malloc6 00:22:25.476 Malloc7 00:22:25.737 Malloc8 00:22:25.737 Malloc9 
00:22:25.737 Malloc10 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=758730 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 758730 /var/tmp/bdevperf.sock 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 758730 ']' 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.737 { 00:22:25.737 "params": { 00:22:25.737 "name": "Nvme$subsystem", 00:22:25.737 "trtype": "$TEST_TRANSPORT", 00:22:25.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.737 "adrfam": "ipv4", 00:22:25.737 "trsvcid": "$NVMF_PORT", 00:22:25.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.737 "hdgst": ${hdgst:-false}, 00:22:25.737 "ddgst": ${ddgst:-false} 00:22:25.737 }, 00:22:25.737 "method": "bdev_nvme_attach_controller" 00:22:25.737 } 00:22:25.737 EOF 00:22:25.737 )") 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.737 { 00:22:25.737 "params": { 00:22:25.737 "name": "Nvme$subsystem", 00:22:25.737 "trtype": "$TEST_TRANSPORT", 00:22:25.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.737 "adrfam": "ipv4", 00:22:25.737 "trsvcid": "$NVMF_PORT", 00:22:25.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.737 "hdgst": ${hdgst:-false}, 00:22:25.737 "ddgst": ${ddgst:-false} 00:22:25.737 }, 00:22:25.737 "method": "bdev_nvme_attach_controller" 00:22:25.737 } 00:22:25.737 EOF 00:22:25.737 )") 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.737 { 00:22:25.737 "params": { 00:22:25.737 "name": "Nvme$subsystem", 00:22:25.737 "trtype": "$TEST_TRANSPORT", 00:22:25.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.737 "adrfam": "ipv4", 00:22:25.737 "trsvcid": "$NVMF_PORT", 00:22:25.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.737 "hdgst": ${hdgst:-false}, 00:22:25.737 "ddgst": ${ddgst:-false} 00:22:25.737 }, 00:22:25.737 "method": "bdev_nvme_attach_controller" 00:22:25.737 } 00:22:25.737 EOF 00:22:25.737 )") 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:25.737 { 00:22:25.737 "params": { 00:22:25.737 "name": "Nvme$subsystem", 00:22:25.737 "trtype": "$TEST_TRANSPORT", 00:22:25.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.737 "adrfam": "ipv4", 00:22:25.737 "trsvcid": "$NVMF_PORT", 00:22:25.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.737 "hdgst": ${hdgst:-false}, 00:22:25.737 "ddgst": ${ddgst:-false} 00:22:25.737 }, 00:22:25.737 "method": "bdev_nvme_attach_controller" 00:22:25.737 } 00:22:25.737 EOF 00:22:25.737 )") 00:22:25.737 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.738 { 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme$subsystem", 00:22:25.738 "trtype": "$TEST_TRANSPORT", 00:22:25.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "$NVMF_PORT", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.738 "hdgst": ${hdgst:-false}, 00:22:25.738 "ddgst": ${ddgst:-false} 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 } 00:22:25.738 EOF 00:22:25.738 )") 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.738 { 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme$subsystem", 00:22:25.738 "trtype": "$TEST_TRANSPORT", 
00:22:25.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "$NVMF_PORT", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.738 "hdgst": ${hdgst:-false}, 00:22:25.738 "ddgst": ${ddgst:-false} 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 } 00:22:25.738 EOF 00:22:25.738 )") 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.738 { 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme$subsystem", 00:22:25.738 "trtype": "$TEST_TRANSPORT", 00:22:25.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "$NVMF_PORT", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.738 "hdgst": ${hdgst:-false}, 00:22:25.738 "ddgst": ${ddgst:-false} 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 } 00:22:25.738 EOF 00:22:25.738 )") 00:22:25.738 [2024-11-20 09:07:51.227642] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:22:25.738 [2024-11-20 09:07:51.227695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758730 ] 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.738 { 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme$subsystem", 00:22:25.738 "trtype": "$TEST_TRANSPORT", 00:22:25.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "$NVMF_PORT", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.738 "hdgst": ${hdgst:-false}, 00:22:25.738 "ddgst": ${ddgst:-false} 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 } 00:22:25.738 EOF 00:22:25.738 )") 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.738 { 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme$subsystem", 00:22:25.738 "trtype": "$TEST_TRANSPORT", 00:22:25.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "$NVMF_PORT", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.738 "hdgst": 
${hdgst:-false}, 00:22:25.738 "ddgst": ${ddgst:-false} 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 } 00:22:25.738 EOF 00:22:25.738 )") 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.738 { 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme$subsystem", 00:22:25.738 "trtype": "$TEST_TRANSPORT", 00:22:25.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "$NVMF_PORT", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.738 "hdgst": ${hdgst:-false}, 00:22:25.738 "ddgst": ${ddgst:-false} 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 } 00:22:25.738 EOF 00:22:25.738 )") 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:25.738 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme1", 00:22:25.738 "trtype": "tcp", 00:22:25.738 "traddr": "10.0.0.2", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "4420", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.738 "hdgst": false, 00:22:25.738 "ddgst": false 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 },{ 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme2", 00:22:25.738 "trtype": "tcp", 00:22:25.738 "traddr": "10.0.0.2", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "4420", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:25.738 "hdgst": false, 00:22:25.738 "ddgst": false 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 },{ 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme3", 00:22:25.738 "trtype": "tcp", 00:22:25.738 "traddr": "10.0.0.2", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "4420", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:25.738 "hdgst": false, 00:22:25.738 "ddgst": false 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 },{ 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme4", 00:22:25.738 "trtype": "tcp", 00:22:25.738 "traddr": "10.0.0.2", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "4420", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:25.738 "hdgst": false, 00:22:25.738 "ddgst": false 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 },{ 00:22:25.738 "params": { 
00:22:25.738 "name": "Nvme5", 00:22:25.738 "trtype": "tcp", 00:22:25.738 "traddr": "10.0.0.2", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "4420", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:25.738 "hdgst": false, 00:22:25.738 "ddgst": false 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 },{ 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme6", 00:22:25.738 "trtype": "tcp", 00:22:25.738 "traddr": "10.0.0.2", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "4420", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:25.738 "hdgst": false, 00:22:25.738 "ddgst": false 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 },{ 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme7", 00:22:25.738 "trtype": "tcp", 00:22:25.738 "traddr": "10.0.0.2", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "4420", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:25.738 "hdgst": false, 00:22:25.738 "ddgst": false 00:22:25.738 }, 00:22:25.738 "method": "bdev_nvme_attach_controller" 00:22:25.738 },{ 00:22:25.738 "params": { 00:22:25.738 "name": "Nvme8", 00:22:25.738 "trtype": "tcp", 00:22:25.738 "traddr": "10.0.0.2", 00:22:25.738 "adrfam": "ipv4", 00:22:25.738 "trsvcid": "4420", 00:22:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:25.738 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:25.738 "hdgst": false, 00:22:25.738 "ddgst": false 00:22:25.739 }, 00:22:25.739 "method": "bdev_nvme_attach_controller" 00:22:25.739 },{ 00:22:25.739 "params": { 00:22:25.739 "name": "Nvme9", 00:22:25.739 "trtype": "tcp", 00:22:25.739 "traddr": "10.0.0.2", 00:22:25.739 "adrfam": "ipv4", 00:22:25.739 "trsvcid": "4420", 00:22:25.739 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:25.739 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:25.739 "hdgst": false, 00:22:25.739 "ddgst": false 00:22:25.739 }, 00:22:25.739 "method": "bdev_nvme_attach_controller" 00:22:25.739 },{ 00:22:25.739 "params": { 00:22:25.739 "name": "Nvme10", 00:22:25.739 "trtype": "tcp", 00:22:25.739 "traddr": "10.0.0.2", 00:22:25.739 "adrfam": "ipv4", 00:22:25.739 "trsvcid": "4420", 00:22:25.739 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:25.739 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:25.739 "hdgst": false, 00:22:25.739 "ddgst": false 00:22:25.739 }, 00:22:25.739 "method": "bdev_nvme_attach_controller" 00:22:25.739 }' 00:22:25.999 [2024-11-20 09:07:51.298819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.000 [2024-11-20 09:07:51.334764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.388 Running I/O for 10 seconds... 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:27.388 09:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:27.388 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:27.649 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:27.649 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:27.649 09:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:27.649 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:27.649 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.649 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.649 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.649 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:27.649 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:27.649 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 758730 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 758730 ']' 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 758730 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.910 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 758730 00:22:28.171 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.171 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.171 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 758730' 00:22:28.171 killing process with pid 758730 00:22:28.171 09:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 758730 00:22:28.171 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 758730 00:22:28.171 Received shutdown signal, test time was about 0.980980 seconds 00:22:28.171 00:22:28.171 Latency(us) 00:22:28.171 [2024-11-20T08:07:53.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.171 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:28.171 Verification LBA range: start 0x0 length 0x400 00:22:28.171 Nvme1n1 : 0.94 203.51 12.72 0.00 0.00 310549.33 15619.41 251658.24 00:22:28.171 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:28.171 Verification LBA range: start 0x0 length 0x400 00:22:28.171 Nvme2n1 : 0.95 203.05 12.69 0.00 0.00 305123.56 21736.11 248162.99 00:22:28.171 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:28.171 Verification LBA range: start 0x0 length 0x400 00:22:28.171 Nvme3n1 : 0.97 262.75 16.42 0.00 0.00 230912.64 19005.44 248162.99 00:22:28.171 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:28.171 Verification LBA range: start 0x0 length 0x400 00:22:28.171 Nvme4n1 : 0.96 266.83 16.68 0.00 0.00 222444.80 20862.29 249910.61 00:22:28.171 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:28.171 Verification LBA range: start 0x0 length 0x400 00:22:28.171 Nvme5n1 : 0.98 261.20 16.32 0.00 0.00 222785.49 17476.27 248162.99 00:22:28.171 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:28.171 Verification LBA range: start 0x0 length 0x400 00:22:28.171 Nvme6n1 : 0.97 264.34 16.52 0.00 0.00 214934.61 20753.07 248162.99 00:22:28.171 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:28.171 Verification LBA range: start 0x0 length 0x400 00:22:28.171 Nvme7n1 : 
0.97 265.17 16.57 0.00 0.00 209508.48 12615.68 246415.36 00:22:28.171 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:28.171 Verification LBA range: start 0x0 length 0x400 00:22:28.171 Nvme8n1 : 0.97 263.55 16.47 0.00 0.00 206042.45 17257.81 225443.84 00:22:28.171 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:28.171 Verification LBA range: start 0x0 length 0x400 00:22:28.171 Nvme9n1 : 0.96 199.46 12.47 0.00 0.00 265507.27 18131.63 255153.49 00:22:28.171 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:28.171 Verification LBA range: start 0x0 length 0x400 00:22:28.171 Nvme10n1 : 0.95 201.14 12.57 0.00 0.00 256490.95 15400.96 263891.63 00:22:28.171 [2024-11-20T08:07:53.700Z] =================================================================================================================== 00:22:28.171 [2024-11-20T08:07:53.701Z] Total : 2390.98 149.44 0.00 0.00 239986.87 12615.68 263891.63 00:22:28.172 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 758489 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:29.557 rmmod nvme_tcp 00:22:29.557 rmmod nvme_fabrics 00:22:29.557 rmmod nvme_keyring 00:22:29.557 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 758489 ']' 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 758489 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 758489 ']' 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 758489 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # 
'[' Linux = Linux ']' 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 758489 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 758489' 00:22:29.558 killing process with pid 758489 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 758489 00:22:29.558 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 758489 00:22:29.558 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:29.558 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:29.558 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:29.558 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:29.558 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:29.558 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:29.558 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:29.558 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.558 09:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:29.558 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.558 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.558 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:32.104 00:22:32.104 real 0m7.768s 00:22:32.104 user 0m23.165s 00:22:32.104 sys 0m1.276s 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:32.104 ************************************ 00:22:32.104 END TEST nvmf_shutdown_tc2 00:22:32.104 ************************************ 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:32.104 ************************************ 00:22:32.104 START TEST nvmf_shutdown_tc3 00:22:32.104 ************************************ 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:32.104 09:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.104 09:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.104 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:32.105 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:32.105 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:32.105 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.105 09:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:32.105 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:22:32.105 00:22:32.105 --- 10.0.0.2 ping statistics --- 00:22:32.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.105 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:22:32.105 00:22:32.105 --- 10.0.0.1 ping statistics --- 00:22:32.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.105 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.105 
09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=760194 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 760194 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 760194 ']' 00:22:32.105 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.106 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.106 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.106 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.106 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.366 [2024-11-20 09:07:57.673017] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:22:32.366 [2024-11-20 09:07:57.673083] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.366 [2024-11-20 09:07:57.767705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.366 [2024-11-20 09:07:57.803186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.366 [2024-11-20 09:07:57.803217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.366 [2024-11-20 09:07:57.803223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.366 [2024-11-20 09:07:57.803228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.366 [2024-11-20 09:07:57.803232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:32.366 [2024-11-20 09:07:57.804791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.367 [2024-11-20 09:07:57.804947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.367 [2024-11-20 09:07:57.805093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.367 [2024-11-20 09:07:57.805094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.309 [2024-11-20 09:07:58.523660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.309 09:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.309 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.309 Malloc1 00:22:33.309 [2024-11-20 09:07:58.635565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.309 Malloc2 00:22:33.309 Malloc3 00:22:33.309 Malloc4 00:22:33.309 Malloc5 00:22:33.309 Malloc6 00:22:33.571 Malloc7 00:22:33.571 Malloc8 00:22:33.571 Malloc9 
00:22:33.571 Malloc10 00:22:33.571 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.571 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:33.571 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.571 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=760573 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 760573 /var/tmp/bdevperf.sock 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 760573 ']' 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.571 { 00:22:33.571 "params": { 00:22:33.571 "name": "Nvme$subsystem", 00:22:33.571 "trtype": "$TEST_TRANSPORT", 00:22:33.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.571 "adrfam": "ipv4", 00:22:33.571 "trsvcid": "$NVMF_PORT", 00:22:33.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.571 "hdgst": ${hdgst:-false}, 00:22:33.571 "ddgst": ${ddgst:-false} 00:22:33.571 }, 00:22:33.571 "method": "bdev_nvme_attach_controller" 00:22:33.571 } 00:22:33.571 EOF 00:22:33.571 )") 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.571 { 00:22:33.571 "params": { 00:22:33.571 "name": "Nvme$subsystem", 00:22:33.571 "trtype": "$TEST_TRANSPORT", 00:22:33.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.571 "adrfam": "ipv4", 00:22:33.571 "trsvcid": "$NVMF_PORT", 00:22:33.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.571 "hdgst": ${hdgst:-false}, 00:22:33.571 "ddgst": ${ddgst:-false} 00:22:33.571 }, 00:22:33.571 "method": "bdev_nvme_attach_controller" 00:22:33.571 } 00:22:33.571 EOF 00:22:33.571 )") 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.571 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.571 { 00:22:33.571 "params": { 00:22:33.571 "name": "Nvme$subsystem", 00:22:33.571 "trtype": "$TEST_TRANSPORT", 00:22:33.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.571 "adrfam": "ipv4", 00:22:33.572 "trsvcid": "$NVMF_PORT", 00:22:33.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.572 "hdgst": ${hdgst:-false}, 00:22:33.572 "ddgst": ${ddgst:-false} 00:22:33.572 }, 00:22:33.572 "method": "bdev_nvme_attach_controller" 00:22:33.572 } 00:22:33.572 EOF 00:22:33.572 )") 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:33.572 { 00:22:33.572 "params": { 00:22:33.572 "name": "Nvme$subsystem", 00:22:33.572 "trtype": "$TEST_TRANSPORT", 00:22:33.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.572 "adrfam": "ipv4", 00:22:33.572 "trsvcid": "$NVMF_PORT", 00:22:33.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.572 "hdgst": ${hdgst:-false}, 00:22:33.572 "ddgst": ${ddgst:-false} 00:22:33.572 }, 00:22:33.572 "method": "bdev_nvme_attach_controller" 00:22:33.572 } 00:22:33.572 EOF 00:22:33.572 )") 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.572 { 00:22:33.572 "params": { 00:22:33.572 "name": "Nvme$subsystem", 00:22:33.572 "trtype": "$TEST_TRANSPORT", 00:22:33.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.572 "adrfam": "ipv4", 00:22:33.572 "trsvcid": "$NVMF_PORT", 00:22:33.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.572 "hdgst": ${hdgst:-false}, 00:22:33.572 "ddgst": ${ddgst:-false} 00:22:33.572 }, 00:22:33.572 "method": "bdev_nvme_attach_controller" 00:22:33.572 } 00:22:33.572 EOF 00:22:33.572 )") 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.572 { 00:22:33.572 "params": { 00:22:33.572 "name": "Nvme$subsystem", 00:22:33.572 "trtype": "$TEST_TRANSPORT", 
00:22:33.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.572 "adrfam": "ipv4", 00:22:33.572 "trsvcid": "$NVMF_PORT", 00:22:33.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.572 "hdgst": ${hdgst:-false}, 00:22:33.572 "ddgst": ${ddgst:-false} 00:22:33.572 }, 00:22:33.572 "method": "bdev_nvme_attach_controller" 00:22:33.572 } 00:22:33.572 EOF 00:22:33.572 )") 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:33.572 [2024-11-20 09:07:59.080230] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:22:33.572 [2024-11-20 09:07:59.080284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760573 ] 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.572 { 00:22:33.572 "params": { 00:22:33.572 "name": "Nvme$subsystem", 00:22:33.572 "trtype": "$TEST_TRANSPORT", 00:22:33.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.572 "adrfam": "ipv4", 00:22:33.572 "trsvcid": "$NVMF_PORT", 00:22:33.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.572 "hdgst": ${hdgst:-false}, 00:22:33.572 "ddgst": ${ddgst:-false} 00:22:33.572 }, 00:22:33.572 "method": "bdev_nvme_attach_controller" 00:22:33.572 } 00:22:33.572 EOF 00:22:33.572 )") 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.572 { 00:22:33.572 "params": { 00:22:33.572 "name": "Nvme$subsystem", 00:22:33.572 "trtype": "$TEST_TRANSPORT", 00:22:33.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.572 "adrfam": "ipv4", 00:22:33.572 "trsvcid": "$NVMF_PORT", 00:22:33.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.572 "hdgst": ${hdgst:-false}, 00:22:33.572 "ddgst": ${ddgst:-false} 00:22:33.572 }, 00:22:33.572 "method": "bdev_nvme_attach_controller" 00:22:33.572 } 00:22:33.572 EOF 00:22:33.572 )") 00:22:33.572 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:33.833 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.833 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.833 { 00:22:33.833 "params": { 00:22:33.833 "name": "Nvme$subsystem", 00:22:33.833 "trtype": "$TEST_TRANSPORT", 00:22:33.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.833 "adrfam": "ipv4", 00:22:33.833 "trsvcid": "$NVMF_PORT", 00:22:33.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.833 "hdgst": ${hdgst:-false}, 00:22:33.833 "ddgst": ${ddgst:-false} 00:22:33.833 }, 00:22:33.833 "method": "bdev_nvme_attach_controller" 00:22:33.833 } 00:22:33.833 EOF 00:22:33.833 )") 00:22:33.833 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:33.833 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.833 09:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.833 { 00:22:33.833 "params": { 00:22:33.833 "name": "Nvme$subsystem", 00:22:33.833 "trtype": "$TEST_TRANSPORT", 00:22:33.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.833 "adrfam": "ipv4", 00:22:33.833 "trsvcid": "$NVMF_PORT", 00:22:33.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.833 "hdgst": ${hdgst:-false}, 00:22:33.833 "ddgst": ${ddgst:-false} 00:22:33.833 }, 00:22:33.833 "method": "bdev_nvme_attach_controller" 00:22:33.833 } 00:22:33.833 EOF 00:22:33.833 )") 00:22:33.833 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:33.833 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:33.833 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:33.833 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:33.833 "params": { 00:22:33.833 "name": "Nvme1", 00:22:33.833 "trtype": "tcp", 00:22:33.833 "traddr": "10.0.0.2", 00:22:33.833 "adrfam": "ipv4", 00:22:33.833 "trsvcid": "4420", 00:22:33.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.834 "hdgst": false, 00:22:33.834 "ddgst": false 00:22:33.834 }, 00:22:33.834 "method": "bdev_nvme_attach_controller" 00:22:33.834 },{ 00:22:33.834 "params": { 00:22:33.834 "name": "Nvme2", 00:22:33.834 "trtype": "tcp", 00:22:33.834 "traddr": "10.0.0.2", 00:22:33.834 "adrfam": "ipv4", 00:22:33.834 "trsvcid": "4420", 00:22:33.834 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:33.834 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:33.834 "hdgst": false, 00:22:33.834 "ddgst": false 00:22:33.834 }, 00:22:33.834 "method": "bdev_nvme_attach_controller" 00:22:33.834 },{ 
00:22:33.834 "params": { 00:22:33.834 "name": "Nvme3", 00:22:33.834 "trtype": "tcp", 00:22:33.834 "traddr": "10.0.0.2", 00:22:33.834 "adrfam": "ipv4", 00:22:33.834 "trsvcid": "4420", 00:22:33.834 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:33.834 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:33.834 "hdgst": false, 00:22:33.834 "ddgst": false 00:22:33.834 }, 00:22:33.834 "method": "bdev_nvme_attach_controller" 00:22:33.834 },{ 00:22:33.834 "params": { 00:22:33.834 "name": "Nvme4", 00:22:33.834 "trtype": "tcp", 00:22:33.834 "traddr": "10.0.0.2", 00:22:33.834 "adrfam": "ipv4", 00:22:33.834 "trsvcid": "4420", 00:22:33.834 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:33.834 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:33.834 "hdgst": false, 00:22:33.834 "ddgst": false 00:22:33.834 }, 00:22:33.834 "method": "bdev_nvme_attach_controller" 00:22:33.834 },{ 00:22:33.834 "params": { 00:22:33.834 "name": "Nvme5", 00:22:33.834 "trtype": "tcp", 00:22:33.834 "traddr": "10.0.0.2", 00:22:33.834 "adrfam": "ipv4", 00:22:33.834 "trsvcid": "4420", 00:22:33.834 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:33.834 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:33.834 "hdgst": false, 00:22:33.834 "ddgst": false 00:22:33.834 }, 00:22:33.834 "method": "bdev_nvme_attach_controller" 00:22:33.834 },{ 00:22:33.834 "params": { 00:22:33.834 "name": "Nvme6", 00:22:33.834 "trtype": "tcp", 00:22:33.834 "traddr": "10.0.0.2", 00:22:33.834 "adrfam": "ipv4", 00:22:33.834 "trsvcid": "4420", 00:22:33.834 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:33.834 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:33.834 "hdgst": false, 00:22:33.834 "ddgst": false 00:22:33.834 }, 00:22:33.834 "method": "bdev_nvme_attach_controller" 00:22:33.834 },{ 00:22:33.834 "params": { 00:22:33.834 "name": "Nvme7", 00:22:33.834 "trtype": "tcp", 00:22:33.834 "traddr": "10.0.0.2", 00:22:33.834 "adrfam": "ipv4", 00:22:33.834 "trsvcid": "4420", 00:22:33.834 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:33.834 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:33.834 "hdgst": false, 00:22:33.834 "ddgst": false 00:22:33.834 }, 00:22:33.834 "method": "bdev_nvme_attach_controller" 00:22:33.834 },{ 00:22:33.834 "params": { 00:22:33.834 "name": "Nvme8", 00:22:33.834 "trtype": "tcp", 00:22:33.834 "traddr": "10.0.0.2", 00:22:33.834 "adrfam": "ipv4", 00:22:33.834 "trsvcid": "4420", 00:22:33.834 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:33.834 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:33.834 "hdgst": false, 00:22:33.834 "ddgst": false 00:22:33.834 }, 00:22:33.834 "method": "bdev_nvme_attach_controller" 00:22:33.834 },{ 00:22:33.834 "params": { 00:22:33.834 "name": "Nvme9", 00:22:33.834 "trtype": "tcp", 00:22:33.834 "traddr": "10.0.0.2", 00:22:33.834 "adrfam": "ipv4", 00:22:33.834 "trsvcid": "4420", 00:22:33.834 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:33.834 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:33.834 "hdgst": false, 00:22:33.834 "ddgst": false 00:22:33.834 }, 00:22:33.834 "method": "bdev_nvme_attach_controller" 00:22:33.834 },{ 00:22:33.834 "params": { 00:22:33.834 "name": "Nvme10", 00:22:33.834 "trtype": "tcp", 00:22:33.834 "traddr": "10.0.0.2", 00:22:33.834 "adrfam": "ipv4", 00:22:33.834 "trsvcid": "4420", 00:22:33.834 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:33.834 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:33.834 "hdgst": false, 00:22:33.834 "ddgst": false 00:22:33.834 }, 00:22:33.834 "method": "bdev_nvme_attach_controller" 00:22:33.834 }' 00:22:33.834 [2024-11-20 09:07:59.170399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.834 [2024-11-20 09:07:59.206537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.220 Running I/O for 10 seconds... 
00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:35.220 09:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.220 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:35.221 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:35.221 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:35.481 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:35.481 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:35.481 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:35.481 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:35.481 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.481 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.481 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:35.481 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=72 00:22:35.481 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 72 -ge 100 ']' 00:22:35.481 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=141 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 141 -ge 100 ']' 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:35.742 09:08:01 
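The xtrace above is the `waitforio` helper from `target/shutdown.sh` polling `bdev_get_iostat` for Nvme1n1 until `num_read_ops` reaches 100 (3, then 72, then 141, at which point `ret=0` and the loop breaks). A condensed, self-contained sketch of that loop — the real script gets the count with `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1` piped through `jq`; here the query is stubbed with the counts seen in the log so the loop runs standalone:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling loop traced above: poll the bdev's
# num_read_ops up to 10 times, 0.25 s apart, and succeed once at least
# 100 reads have completed.
io_counts=(3 72 141)   # read counts observed in the log
idx=0
read_io_count() {      # stub for: rpc_cmd ... bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'
  count=${io_counts[idx]}
  idx=$(( idx + 1 ))
}

waitforio() {
  local ret=1 i
  for (( i = 10; i != 0; i-- )); do   # (( i = 10 )) / (( i != 0 )) in the log
    read_io_count
    if (( count >= 100 )); then       # '[' 141 -ge 100 ']' in the log
      ret=0
      break
    fi
    sleep 0.25                        # sleep 0.25 between polls
  done
  return $ret
}

waitforio && echo "I/O is flowing"    # the tc3 test then proceeds to killprocess
```

The countdown gives the target roughly 2.5 seconds to start serving reads before the test gives up; in this run three polls sufficed.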
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 760194 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 760194 ']' 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 760194 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.742 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 760194 00:22:36.024 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:36.024 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:36.024 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 760194' 00:22:36.024 killing process with pid 760194 00:22:36.024 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 760194 00:22:36.024 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 760194 00:22:36.024 [2024-11-20 09:08:01.285796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219c110 is same with the state(6) to be set 00:22:36.024 [2024-11-20 09:08:01.287180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219ece0 is same with the state(6) to be set 00:22:36.025 [2024-11-20 09:08:01.289412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cad0 is same with the state(6) to be set
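The `killprocess 760194` trace above walks the helper's guard rails: check the pid argument, probe liveness with `kill -0`, resolve the process name with `ps` and refuse to kill a bare `sudo` wrapper, then kill and reap. A simplified standalone sketch of that logic (the real `autotest_common.sh` helper has more branches, e.g. non-Linux handling, which are omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper as traced in the log. Simplified:
# the real helper in autotest_common.sh handles additional cases.
killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1                 # '[' -z 760194 ']'
  kill -0 "$pid" 2>/dev/null || return 0    # kill -0: process already gone
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")
  [ "$process_name" = sudo ] && return 1    # never kill the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true           # reap; ignore the SIGTERM exit status
}

sleep 60 &            # disposable background process to demonstrate on
killprocess $!
```

In the log the name check resolves to `reactor_1` (an SPDK reactor thread), so the helper proceeds to `kill` and `wait` on pid 760194.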
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cad0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.289723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cad0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.289729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cad0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.289733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cad0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.289738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cad0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.289743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cad0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290582] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290638] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290696] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290753] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.026 [2024-11-20 09:08:01.290762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290809] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.290847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cfc0 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291335] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291392] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291452] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291510] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291567] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.291618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219d490 is same with the state(6) to be set 00:22:36.027 [2024-11-20 09:08:01.292666] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292738] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292796] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292853] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292911] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292968] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.292977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219de30 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.293719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.293735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.293741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.293746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.028 [2024-11-20 09:08:01.293751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293775] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293832] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293890] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293945] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.293959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.301658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:36.029 [2024-11-20 09:08:01.301757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0180 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.301789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x579fa0 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.301887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301899] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x582810 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.301978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.301987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.301995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.302002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 
[2024-11-20 09:08:01.302011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.302021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.302029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.029 [2024-11-20 09:08:01.302036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.029 [2024-11-20 09:08:01.302043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x581420 is same with the state(6) to be set 00:22:36.029 [2024-11-20 09:08:01.302066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d6f20 is same with the state(6) to be set 00:22:36.030 [2024-11-20 09:08:01.302152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x49c610 is same with the state(6) to be set 00:22:36.030 [2024-11-20 
09:08:01.302247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x584cb0 is same with the state(6) to be set 00:22:36.030 [2024-11-20 09:08:01.302334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57b9f0 is same with the state(6) to be set 00:22:36.030 [2024-11-20 09:08:01.302420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302460] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.030 [2024-11-20 09:08:01.302475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8d00 is same with the state(6) to be set 00:22:36.030 [2024-11-20 09:08:01.302564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302862] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.030 [2024-11-20 09:08:01.302886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.030 [2024-11-20 09:08:01.302895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.302902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.302911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.302919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.302928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.302935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.302935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.302945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.302955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.302957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.302964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.302965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.302970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.302973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.302977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.302983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.302983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.302988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.302991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.302997] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.303002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.303002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.303011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.303019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.303024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.303024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219e7f0 is same with the state(6) to be set 00:22:36.031 [2024-11-20 09:08:01.303032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:36.031 [2024-11-20 09:08:01.303050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 
[2024-11-20 09:08:01.303431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.031 [2024-11-20 09:08:01.303448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.031 [2024-11-20 09:08:01.303455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.303988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.303997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 
09:08:01.304059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304152] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 
09:08:01.304351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.032 [2024-11-20 09:08:01.304362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.032 [2024-11-20 09:08:01.304369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.304379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.304386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.304395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.304405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.304414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.304421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.304431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.304438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.304447] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.304454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.304464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.304472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.304482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.304489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.304498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.304505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.304515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.304522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.304531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.304538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 
[2024-11-20 09:08:01.312736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.312984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.312992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.313003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.313010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.313019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.313027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.033 [2024-11-20 09:08:01.313036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.033 [2024-11-20 09:08:01.313044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.313054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.313061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.313070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.313078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.313087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.313095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.313104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.313112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.313121] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.313128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.313462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b0180 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.313495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x579fa0 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.313526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.034 [2024-11-20 09:08:01.313536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.313545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.034 [2024-11-20 09:08:01.313552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.313560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.034 [2024-11-20 09:08:01.313568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.313576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.034 [2024-11-20 09:08:01.313586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.313594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb310 is same with the state(6) to be set 00:22:36.034 [2024-11-20 09:08:01.313613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x582810 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.313629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x581420 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.313646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d6f20 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.313661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x49c610 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.313673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x584cb0 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.313688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57b9f0 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.313702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f8d00 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.316441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:36.034 [2024-11-20 09:08:01.317418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:36.034 [2024-11-20 09:08:01.317843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.034 [2024-11-20 09:08:01.317862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57b9f0 with addr=10.0.0.2, port=4420 00:22:36.034 [2024-11-20 09:08:01.317871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x57b9f0 is same with the state(6) to be set 00:22:36.034 [2024-11-20 09:08:01.318248] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.034 [2024-11-20 09:08:01.318293] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.034 [2024-11-20 09:08:01.318329] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.034 [2024-11-20 09:08:01.318628] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.034 [2024-11-20 09:08:01.318746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.318759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.318775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.318783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.318792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.318800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.318810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.318817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.318827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.318834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.318846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98a4f0 is same with the state(6) to be set 00:22:36.034 [2024-11-20 09:08:01.318968] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.034 [2024-11-20 09:08:01.319457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.034 [2024-11-20 09:08:01.319497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x49c610 with addr=10.0.0.2, port=4420 00:22:36.034 [2024-11-20 09:08:01.319509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x49c610 is same with the state(6) to be set 00:22:36.034 [2024-11-20 09:08:01.319528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57b9f0 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.319594] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.034 [2024-11-20 09:08:01.320634] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.034 [2024-11-20 09:08:01.320667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:36.034 [2024-11-20 09:08:01.320696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x49c610 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.320709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:36.034 [2024-11-20 09:08:01.320718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:36.034 [2024-11-20 09:08:01.320728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:36.034 [2024-11-20 09:08:01.320739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:36.034 [2024-11-20 09:08:01.321138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.034 [2024-11-20 09:08:01.321154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x579fa0 with addr=10.0.0.2, port=4420 00:22:36.034 [2024-11-20 09:08:01.321167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x579fa0 is same with the state(6) to be set 00:22:36.034 [2024-11-20 09:08:01.321176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:36.034 [2024-11-20 09:08:01.321182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:36.034 [2024-11-20 09:08:01.321190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:36.034 [2024-11-20 09:08:01.321197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:22:36.034 [2024-11-20 09:08:01.321504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x579fa0 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.321551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:36.034 [2024-11-20 09:08:01.321559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:36.034 [2024-11-20 09:08:01.321566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:36.034 [2024-11-20 09:08:01.321572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:36.034 [2024-11-20 09:08:01.323466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cb310 (9): Bad file descriptor 00:22:36.034 [2024-11-20 09:08:01.323608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.323621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.323641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.323650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.323661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.323669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:36.034 [2024-11-20 09:08:01.323679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.323686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.034 [2024-11-20 09:08:01.323696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.034 [2024-11-20 09:08:01.323703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323773] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.323983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.323992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 
09:08:01.324070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324171] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 
[2024-11-20 09:08:01.324372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.035 [2024-11-20 09:08:01.324388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.035 [2024-11-20 09:08:01.324398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.324736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.324745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa727d0 is same with the state(6) to be set 00:22:36.036 [2024-11-20 09:08:01.326022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326036] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326356] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.036 [2024-11-20 09:08:01.326366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.036 [2024-11-20 09:08:01.326374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 
09:08:01.326646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326744] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 
[2024-11-20 09:08:01.326942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.037 [2024-11-20 09:08:01.326968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.037 [2024-11-20 09:08:01.326975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.326984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.326992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.327001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.327009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.327019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.327026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.327036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.327043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.327052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.327059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.327069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.327077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.327086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.327093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.327103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.327110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.327122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.327129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.327139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.327146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.327154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x789d60 is same with the state(6) to be set 00:22:36.038 [2024-11-20 09:08:01.328432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:36.038 [2024-11-20 09:08:01.328509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.038 [2024-11-20 09:08:01.328691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.038 [2024-11-20 09:08:01.328701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.038 [2024-11-20 09:08:01.328916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.038 [2024-11-20 09:08:01.328926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.328934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.328943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.328950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.328960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.328967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.328976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.328984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.328994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.329541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.329549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x984fa0 is same with the state(6) to be set
00:22:36.039 [2024-11-20 09:08:01.330821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.330836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.330848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.330858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.039 [2024-11-20 09:08:01.330870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.039 [2024-11-20 09:08:01.330879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.330890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.330899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.330911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.330919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.330930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.330937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.330946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.330954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.330963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.330971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.330984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.330991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.040 [2024-11-20 09:08:01.331565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.040 [2024-11-20 09:08:01.331574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.041 [2024-11-20 09:08:01.331942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.041 [2024-11-20 09:08:01.331950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.331958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9864d0 is same with the state(6) to be set 00:22:36.041 [2024-11-20 09:08:01.333233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:36.041 [2024-11-20 09:08:01.333325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.041 [2024-11-20 09:08:01.333515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.041 [2024-11-20 09:08:01.333523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:36.042 [2024-11-20 09:08:01.333621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333712] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.333986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.333996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 
09:08:01.334004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.334013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.334020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.334031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.334039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.334048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.334056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.334066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.334073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.334082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.334089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.334099] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.042 [2024-11-20 09:08:01.334107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.042 [2024-11-20 09:08:01.334117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 
[2024-11-20 09:08:01.334302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.334336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.334345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988fc0 is same with the state(6) to be set 00:22:36.043 [2024-11-20 09:08:01.335623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:36.043 [2024-11-20 09:08:01.335889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.335984] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.335994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.336001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.336012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.336019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.336029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.336036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.336046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.336053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.336062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.336070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.043 [2024-11-20 09:08:01.336080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.043 [2024-11-20 09:08:01.336088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 
09:08:01.336282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336378] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 
[2024-11-20 09:08:01.336572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.044 [2024-11-20 09:08:01.336743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.044 [2024-11-20 09:08:01.336752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c5320 is same with the state(6) to be set 00:22:36.044 [2024-11-20 09:08:01.338054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 
1] resetting controller 00:22:36.045 [2024-11-20 09:08:01.338069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:36.045 [2024-11-20 09:08:01.338079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:36.045 [2024-11-20 09:08:01.338089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:36.045 [2024-11-20 09:08:01.338182] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:22:36.045 [2024-11-20 09:08:01.338199] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:36.045 [2024-11-20 09:08:01.338276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:36.045 [2024-11-20 09:08:01.338287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:36.045 [2024-11-20 09:08:01.338556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.045 [2024-11-20 09:08:01.338571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x584cb0 with addr=10.0.0.2, port=4420 00:22:36.045 [2024-11-20 09:08:01.338579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x584cb0 is same with the state(6) to be set 00:22:36.045 [2024-11-20 09:08:01.338857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.045 [2024-11-20 09:08:01.338868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x582810 with addr=10.0.0.2, port=4420 00:22:36.045 [2024-11-20 09:08:01.338876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x582810 is same with the 
state(6) to be set 00:22:36.045 [2024-11-20 09:08:01.339413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.045 [2024-11-20 09:08:01.339454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x581420 with addr=10.0.0.2, port=4420 00:22:36.045 [2024-11-20 09:08:01.339465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x581420 is same with the state(6) to be set 00:22:36.045 [2024-11-20 09:08:01.339798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.045 [2024-11-20 09:08:01.339812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0180 with addr=10.0.0.2, port=4420 00:22:36.045 [2024-11-20 09:08:01.339820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0180 is same with the state(6) to be set 00:22:36.045 [2024-11-20 09:08:01.341188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 
09:08:01.341255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:36.045 [2024-11-20 09:08:01.341555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341649] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.045 [2024-11-20 09:08:01.341726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.045 [2024-11-20 09:08:01.341734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 
09:08:01.341941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.341986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.341994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342037] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 
[2024-11-20 09:08:01.342246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.046 [2024-11-20 09:08:01.342315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.046 [2024-11-20 09:08:01.342323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ba20 is same with the state(6) to be set 00:22:36.046 [2024-11-20 09:08:01.344128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:36.046 [2024-11-20 09:08:01.344153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:36.047 [2024-11-20 09:08:01.344169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:36.047 task offset: 24576 on job bdev=Nvme2n1 fails 00:22:36.047 00:22:36.047 Latency(us) 00:22:36.047 [2024-11-20T08:08:01.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.047 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.047 Job: Nvme1n1 ended in about 0.97 seconds with error 00:22:36.047 Verification LBA range: start 0x0 length 0x400 00:22:36.047 Nvme1n1 : 0.97 204.04 12.75 66.29 0.00 234069.19 21299.20 221074.77 00:22:36.047 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.047 Job: Nvme2n1 ended in about 0.95 seconds with error 00:22:36.047 Verification LBA range: start 0x0 length 0x400 00:22:36.047 Nvme2n1 : 0.95 201.15 12.57 67.05 0.00 231089.81 12943.36 232434.35 00:22:36.047 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.047 Job: Nvme3n1 ended in about 0.97 seconds with error 00:22:36.047 Verification LBA range: start 0x0 length 0x400 00:22:36.047 Nvme3n1 : 0.97 202.50 12.66 66.12 0.00 226013.50 16711.68 249910.61 00:22:36.047 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.047 Job: Nvme4n1 ended in about 0.97 seconds with error 00:22:36.047 Verification LBA range: start 0x0 length 0x400 00:22:36.047 Nvme4n1 : 0.97 197.88 12.37 65.96 0.00 225264.85 18022.40 232434.35 00:22:36.047 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.047 Job: Nvme5n1 ended in about 0.97 seconds with error 00:22:36.047 Verification LBA range: start 0x0 length 0x400 00:22:36.047 Nvme5n1 : 0.97 131.59 8.22 65.80 0.00 294891.52 15837.87 253405.87 00:22:36.047 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.047 Job: Nvme6n1 ended in about 
0.96 seconds with error 00:22:36.047 Verification LBA range: start 0x0 length 0x400 00:22:36.047 Nvme6n1 : 0.96 200.87 12.55 66.96 0.00 212040.00 13817.17 258648.75 00:22:36.047 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.047 Job: Nvme7n1 ended in about 0.98 seconds with error 00:22:36.047 Verification LBA range: start 0x0 length 0x400 00:22:36.047 Nvme7n1 : 0.98 196.90 12.31 65.63 0.00 212030.61 9666.56 260396.37 00:22:36.047 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.047 Job: Nvme8n1 ended in about 0.96 seconds with error 00:22:36.047 Verification LBA range: start 0x0 length 0x400 00:22:36.047 Nvme8n1 : 0.96 199.97 12.50 5.21 0.00 263761.33 37573.97 262144.00 00:22:36.047 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.047 Job: Nvme9n1 ended in about 0.98 seconds with error 00:22:36.047 Verification LBA range: start 0x0 length 0x400 00:22:36.047 Nvme9n1 : 0.98 130.21 8.14 65.10 0.00 272680.96 15619.41 269134.51 00:22:36.047 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.047 Job: Nvme10n1 ended in about 0.98 seconds with error 00:22:36.047 Verification LBA range: start 0x0 length 0x400 00:22:36.047 Nvme10n1 : 0.98 130.95 8.18 65.47 0.00 264442.31 26105.17 249910.61 00:22:36.047 [2024-11-20T08:08:01.576Z] =================================================================================================================== 00:22:36.047 [2024-11-20T08:08:01.576Z] Total : 1796.04 112.25 599.58 0.00 240272.76 9666.56 269134.51 00:22:36.047 [2024-11-20 09:08:01.368677] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:36.047 [2024-11-20 09:08:01.368707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:36.047 [2024-11-20 09:08:01.369090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.047 [2024-11-20 
09:08:01.369108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d6f20 with addr=10.0.0.2, port=4420 00:22:36.047 [2024-11-20 09:08:01.369117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d6f20 is same with the state(6) to be set 00:22:36.047 [2024-11-20 09:08:01.369315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.047 [2024-11-20 09:08:01.369325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f8d00 with addr=10.0.0.2, port=4420 00:22:36.047 [2024-11-20 09:08:01.369333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8d00 is same with the state(6) to be set 00:22:36.047 [2024-11-20 09:08:01.369346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x584cb0 (9): Bad file descriptor 00:22:36.047 [2024-11-20 09:08:01.369357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x582810 (9): Bad file descriptor 00:22:36.047 [2024-11-20 09:08:01.369368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x581420 (9): Bad file descriptor 00:22:36.047 [2024-11-20 09:08:01.369378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b0180 (9): Bad file descriptor 00:22:36.047 [2024-11-20 09:08:01.369860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.047 [2024-11-20 09:08:01.369874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57b9f0 with addr=10.0.0.2, port=4420 00:22:36.047 [2024-11-20 09:08:01.369886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57b9f0 is same with the state(6) to be set 00:22:36.047 [2024-11-20 09:08:01.370182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.047 [2024-11-20 
09:08:01.370193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x49c610 with addr=10.0.0.2, port=4420 00:22:36.047 [2024-11-20 09:08:01.370200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x49c610 is same with the state(6) to be set 00:22:36.047 [2024-11-20 09:08:01.370468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.047 [2024-11-20 09:08:01.370478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x579fa0 with addr=10.0.0.2, port=4420 00:22:36.047 [2024-11-20 09:08:01.370485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x579fa0 is same with the state(6) to be set 00:22:36.047 [2024-11-20 09:08:01.370828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.047 [2024-11-20 09:08:01.370837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cb310 with addr=10.0.0.2, port=4420 00:22:36.047 [2024-11-20 09:08:01.370845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb310 is same with the state(6) to be set 00:22:36.047 [2024-11-20 09:08:01.370854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d6f20 (9): Bad file descriptor 00:22:36.047 [2024-11-20 09:08:01.370864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f8d00 (9): Bad file descriptor 00:22:36.047 [2024-11-20 09:08:01.370874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:36.047 [2024-11-20 09:08:01.370881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:36.047 [2024-11-20 09:08:01.370890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:22:36.047 [2024-11-20 09:08:01.370900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:36.047 [2024-11-20 09:08:01.370909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:36.047 [2024-11-20 09:08:01.370915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:36.047 [2024-11-20 09:08:01.370923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:36.047 [2024-11-20 09:08:01.370929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:36.047 [2024-11-20 09:08:01.370937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:36.047 [2024-11-20 09:08:01.370943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:36.047 [2024-11-20 09:08:01.370951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:36.047 [2024-11-20 09:08:01.370957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:36.047 [2024-11-20 09:08:01.370965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:36.047 [2024-11-20 09:08:01.370971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:36.047 [2024-11-20 09:08:01.370978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:22:36.047 [2024-11-20 09:08:01.370984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:36.047 [2024-11-20 09:08:01.371034] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:36.047 [2024-11-20 09:08:01.371050] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:22:36.047 [2024-11-20 09:08:01.371430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57b9f0 (9): Bad file descriptor 00:22:36.047 [2024-11-20 09:08:01.371444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x49c610 (9): Bad file descriptor 00:22:36.047 [2024-11-20 09:08:01.371454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x579fa0 (9): Bad file descriptor 00:22:36.047 [2024-11-20 09:08:01.371463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cb310 (9): Bad file descriptor 00:22:36.047 [2024-11-20 09:08:01.371471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:36.047 [2024-11-20 09:08:01.371478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:36.047 [2024-11-20 09:08:01.371486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:36.047 [2024-11-20 09:08:01.371493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:22:36.047 [2024-11-20 09:08:01.371500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:36.047 [2024-11-20 09:08:01.371507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:36.047 [2024-11-20 09:08:01.371514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:36.047 [2024-11-20 09:08:01.371521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:36.047 [2024-11-20 09:08:01.371779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:36.048 [2024-11-20 09:08:01.371793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:36.048 [2024-11-20 09:08:01.371803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:36.048 [2024-11-20 09:08:01.371812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:36.048 [2024-11-20 09:08:01.371845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:36.048 [2024-11-20 09:08:01.371852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:36.048 [2024-11-20 09:08:01.371860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:36.048 [2024-11-20 09:08:01.371866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:36.048 [2024-11-20 09:08:01.371875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:36.048 [2024-11-20 09:08:01.371882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:36.048 [2024-11-20 09:08:01.371889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:36.048 [2024-11-20 09:08:01.371896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:36.048 [2024-11-20 09:08:01.371903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:36.048 [2024-11-20 09:08:01.371910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:36.048 [2024-11-20 09:08:01.371917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:36.048 [2024-11-20 09:08:01.371924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:36.048 [2024-11-20 09:08:01.371934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:36.048 [2024-11-20 09:08:01.371941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:36.048 [2024-11-20 09:08:01.371948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:36.048 [2024-11-20 09:08:01.371954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:22:36.048 [2024-11-20 09:08:01.372183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.048 [2024-11-20 09:08:01.372199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0180 with addr=10.0.0.2, port=4420 00:22:36.048 [2024-11-20 09:08:01.372207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0180 is same with the state(6) to be set 00:22:36.048 [2024-11-20 09:08:01.372432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.048 [2024-11-20 09:08:01.372443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x581420 with addr=10.0.0.2, port=4420 00:22:36.048 [2024-11-20 09:08:01.372450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x581420 is same with the state(6) to be set 00:22:36.048 [2024-11-20 09:08:01.372628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.048 [2024-11-20 09:08:01.372638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x582810 with addr=10.0.0.2, port=4420 00:22:36.048 [2024-11-20 09:08:01.372646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x582810 is same with the state(6) to be set 00:22:36.048 [2024-11-20 09:08:01.372937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.048 [2024-11-20 09:08:01.372947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x584cb0 with addr=10.0.0.2, port=4420 00:22:36.048 [2024-11-20 09:08:01.372955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x584cb0 is same with the state(6) to be set 00:22:36.048 [2024-11-20 09:08:01.372988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b0180 (9): Bad file descriptor 00:22:36.048 [2024-11-20 09:08:01.372999] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x581420 (9): Bad file descriptor 00:22:36.048 [2024-11-20 09:08:01.373008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x582810 (9): Bad file descriptor 00:22:36.048 [2024-11-20 09:08:01.373017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x584cb0 (9): Bad file descriptor 00:22:36.048 [2024-11-20 09:08:01.373044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:36.048 [2024-11-20 09:08:01.373051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:36.048 [2024-11-20 09:08:01.373059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:36.048 [2024-11-20 09:08:01.373066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:36.048 [2024-11-20 09:08:01.373073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:36.048 [2024-11-20 09:08:01.373079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:36.048 [2024-11-20 09:08:01.373086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:36.048 [2024-11-20 09:08:01.373092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:36.048 [2024-11-20 09:08:01.373100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:36.048 [2024-11-20 09:08:01.373109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:36.048 [2024-11-20 09:08:01.373116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:36.048 [2024-11-20 09:08:01.373123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:36.048 [2024-11-20 09:08:01.373130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:36.048 [2024-11-20 09:08:01.373136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:36.048 [2024-11-20 09:08:01.373143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:36.048 [2024-11-20 09:08:01.373149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:36.309 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 760573 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 760573 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 760573 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.251 rmmod nvme_tcp 00:22:37.251 rmmod nvme_fabrics 00:22:37.251 rmmod nvme_keyring 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:37.251 09:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 760194 ']' 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 760194 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 760194 ']' 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 760194 00:22:37.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (760194) - No such process 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 760194 is not found' 00:22:37.251 Process with pid 760194 is not found 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.251 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.797 00:22:39.797 real 0m7.480s 00:22:39.797 user 0m17.582s 00:22:39.797 sys 0m1.240s 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:39.797 ************************************ 00:22:39.797 END TEST nvmf_shutdown_tc3 00:22:39.797 ************************************ 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:39.797 ************************************ 00:22:39.797 START TEST nvmf_shutdown_tc4 00:22:39.797 ************************************ 00:22:39.797 09:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:39.797 09:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.797 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.798 09:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:39.798 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:39.798 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.798 09:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:22:39.798 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:39.798 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.798 09:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.798 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:39.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:22:39.798 00:22:39.798 --- 10.0.0.2 ping statistics --- 00:22:39.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.798 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:22:39.798 00:22:39.798 --- 10.0.0.1 ping statistics --- 00:22:39.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.798 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.798 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.798 09:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=761711 00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 761711 00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 761711 ']' 00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.799 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:39.799 [2024-11-20 09:08:05.224837] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:22:39.799 [2024-11-20 09:08:05.224903] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.799 [2024-11-20 09:08:05.321731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.059 [2024-11-20 09:08:05.356541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.059 [2024-11-20 09:08:05.356572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.059 [2024-11-20 09:08:05.356578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.059 [2024-11-20 09:08:05.356584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.059 [2024-11-20 09:08:05.356588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:40.059 [2024-11-20 09:08:05.357939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.059 [2024-11-20 09:08:05.358092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.059 [2024-11-20 09:08:05.358224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.059 [2024-11-20 09:08:05.358226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 [2024-11-20 09:08:06.069442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.629 09:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.629 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:40.890 Malloc1 00:22:40.890 [2024-11-20 09:08:06.177745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.890 Malloc2 00:22:40.890 Malloc3 00:22:40.890 Malloc4 00:22:40.890 Malloc5 00:22:40.890 Malloc6 00:22:40.890 Malloc7 00:22:41.150 Malloc8 00:22:41.150 Malloc9 
00:22:41.150 Malloc10 00:22:41.150 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.150 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:41.150 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.150 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:41.150 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=762090 00:22:41.150 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:41.150 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:41.150 [2024-11-20 09:08:06.659927] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 761711
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 761711 ']'
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 761711
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 761711
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 761711'
00:22:46.439 killing process with pid 761711
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 761711
00:22:46.439 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 761711
00:22:46.439 [2024-11-20 09:08:11.658263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb650 is same with the state(6) to be set
00:22:46.439 [... message repeated 3 more times for tqpair=0x20eb650 ...]
00:22:46.439 [2024-11-20 09:08:11.658815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ebff0 is same with the state(6) to be set
00:22:46.440 [... message repeated 7 more times for tqpair=0x20ebff0 ...]
00:22:46.440 [2024-11-20 09:08:11.659290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb180 is same with the state(6) to be set
00:22:46.440 [... message repeated 6 more times for tqpair=0x20eb180 ...]
00:22:46.440 Write completed with error (sct=0, sc=8)
00:22:46.440 starting I/O failed: -6
00:22:46.440 [... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining queued I/Os ...]
00:22:46.440 [2024-11-20 09:08:11.663799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:46.440 [... write failures continue ...]
00:22:46.440 [2024-11-20 09:08:11.664477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef010 is same with the state(6) to be set
00:22:46.440 [... message repeated 6 more times for tqpair=0x20ef010, interleaved with write failures ...]
00:22:46.440 [2024-11-20 09:08:11.664737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef4e0 is same with the state(6) to be set
00:22:46.440 [... message repeated 1 more time for tqpair=0x20ef4e0 ...]
00:22:46.440 [2024-11-20 09:08:11.664755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:46.440 [2024-11-20 09:08:11.664915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ef9b0 is same with the state(6) to be set
00:22:46.441 [... message repeated 4 more times for tqpair=0x20ef9b0, interleaved with write failures ...]
00:22:46.441 [2024-11-20 09:08:11.665314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eeb40 is same with the state(6) to be set
00:22:46.441 [... message repeated 3 more times for tqpair=0x20eeb40, interleaved with write failures ...]
00:22:46.441 [2024-11-20 09:08:11.665652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:46.441 [... write failures continue ...]
00:22:46.441 [2024-11-20 09:08:11.666419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed800 is same with the state(6) to be set
00:22:46.441 [... message repeated 3 more times for tqpair=0x20ed800, interleaved with write failures ...]
00:22:46.442 [2024-11-20 09:08:11.667126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:46.442 NVMe io qpair process completion error
00:22:46.442 [... write failures continue ...]
00:22:46.442 [2024-11-20 09:08:11.668114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:46.442 [... write failures continue ...]
00:22:46.442 [2024-11-20 09:08:11.668924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:46.442 [... write failures continue ...]
00:22:46.443 [2024-11-20 09:08:11.669853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:46.443 [... write failures continue ...]
00:22:46.443 Write
completed with error (sct=0, sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.443 [2024-11-20 09:08:11.671334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:46.443 NVMe io qpair process completion error 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, 
sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 [2024-11-20 09:08:11.672677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:46.443 Write completed with error (sct=0, sc=8) 00:22:46.443 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 
Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with 
error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 [2024-11-20 09:08:11.673485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: 
-6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with 
error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 [2024-11-20 09:08:11.674527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, 
sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.444 Write completed with error (sct=0, sc=8) 00:22:46.444 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error 
(sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with 
error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 [2024-11-20 09:08:11.677315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:46.445 NVMe io qpair process completion error 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write 
completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 [2024-11-20 09:08:11.678464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:46.445 starting I/O failed: -6 00:22:46.445 starting I/O failed: -6 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 
00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 starting I/O failed: -6 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.445 [2024-11-20 09:08:11.679436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:46.445 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O 
failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write completed with error (sct=0, sc=8) 00:22:46.446 starting I/O failed: -6 00:22:46.446 Write 
completed with error (sct=0, sc=8)
00:22:46.446 [2024-11-20 09:08:11.680341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:46.446 Write completed with error (sct=0, sc=8)
00:22:46.446 starting I/O failed: -6
00:22:46.446 [2024-11-20 09:08:11.681986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:46.446 NVMe io qpair process completion error
00:22:46.446 Write completed with error (sct=0, sc=8)
00:22:46.446 starting I/O failed: -6
00:22:46.447 [2024-11-20 09:08:11.683347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:46.447 Write completed with error (sct=0, sc=8)
00:22:46.447 starting I/O failed: -6
00:22:46.447 [2024-11-20 09:08:11.684169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:46.447 Write completed with error (sct=0, sc=8)
00:22:46.447 starting I/O failed: -6
00:22:46.447 [2024-11-20 09:08:11.685082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:46.447 Write completed with error (sct=0, sc=8)
00:22:46.447 starting I/O failed: -6
00:22:46.448 [2024-11-20 09:08:11.687377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:46.448 NVMe io qpair process completion error
00:22:46.448 Write completed with error (sct=0, sc=8)
00:22:46.448 starting I/O failed: -6
00:22:46.448 [2024-11-20 09:08:11.688626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:46.448 Write completed with error (sct=0, sc=8)
00:22:46.448 starting I/O failed: -6
00:22:46.449 [2024-11-20 09:08:11.689526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:46.449 Write completed with error (sct=0, sc=8)
00:22:46.449 starting I/O failed: -6
00:22:46.449 [2024-11-20 09:08:11.690437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:46.449 Write completed with error (sct=0, sc=8)
00:22:46.449 starting I/O failed: -6
00:22:46.450 [2024-11-20 09:08:11.693163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:46.450 NVMe io qpair process completion error
00:22:46.450 Write completed with error (sct=0, sc=8)
00:22:46.450 starting I/O failed: -6
00:22:46.450 [2024-11-20 09:08:11.694534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:46.450 Write completed with error (sct=0, sc=8)
00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 [2024-11-20 09:08:11.695480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 
00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with 
error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.450 Write completed with error (sct=0, sc=8) 00:22:46.450 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 [2024-11-20 09:08:11.696390] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with 
error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed 
with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 [2024-11-20 09:08:11.697819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:46.451 NVMe io qpair process completion error 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error 
(sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 
Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 [2024-11-20 09:08:11.698743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 Write completed with error (sct=0, sc=8) 00:22:46.451 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 
00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 [2024-11-20 09:08:11.699567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:46.452 starting I/O failed: -6 00:22:46.452 starting I/O failed: -6 00:22:46.452 starting I/O failed: -6 00:22:46.452 starting I/O failed: -6 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with 
error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 
starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 [2024-11-20 09:08:11.700728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 
00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: 
-6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O failed: -6 00:22:46.452 Write completed with error (sct=0, sc=8) 00:22:46.452 starting I/O 
failed: -6
00:22:46.453 Write completed with error (sct=0, sc=8)
00:22:46.453 starting I/O failed: -6
[the two entries above repeat for every outstanding write on each affected qpair; identical repeats trimmed]
00:22:46.453 [2024-11-20 09:08:11.703678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:46.453 NVMe io qpair process completion error
00:22:46.453 [2024-11-20 09:08:11.704789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:46.453 [2024-11-20 09:08:11.705590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:46.454 [2024-11-20 09:08:11.706710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:46.454 [2024-11-20 09:08:11.708327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:46.454 NVMe io qpair process completion error
00:22:46.454 [2024-11-20 09:08:11.709497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:46.455 [2024-11-20 09:08:11.710488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:46.455 [2024-11-20 09:08:11.711467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:46.456 [2024-11-20 09:08:11.714840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:46.456 NVMe io qpair process completion error
00:22:46.456 Initializing NVMe Controllers
00:22:46.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:46.456 Controller IO queue size 128, less than required.
00:22:46.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:46.456 Controller IO queue size 128, less than required.
00:22:46.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:46.456 Controller IO queue size 128, less than required.
00:22:46.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:46.456 Controller IO queue size 128, less than required.
00:22:46.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:46.456 Controller IO queue size 128, less than required.
00:22:46.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:46.456 Controller IO queue size 128, less than required.
00:22:46.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:46.456 Controller IO queue size 128, less than required.
00:22:46.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:46.456 Controller IO queue size 128, less than required.
00:22:46.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:46.456 Controller IO queue size 128, less than required.
00:22:46.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:46.456 Controller IO queue size 128, less than required.
00:22:46.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:46.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:46.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:46.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:46.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:46.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:46.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:46.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:46.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:46.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:46.456 Initialization complete. Launching workers.
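As an aside on reading the failures above: the recurring `(sct=0, sc=8)` pair decodes, per the generic command status table in the NVMe base specification, to "Command Aborted due to SQ Deletion", which is consistent with the qpairs being torn down after the CQ transport errors. A minimal decoder sketch (the helper and its abbreviated status table are illustrative, not part of SPDK):

```python
# Decode NVMe completion-status pairs like the "(sct=0, sc=8)" log entries.
# Names follow the NVMe base spec's status tables (abbreviated here).

SCT_NAMES = {
    0x0: "Generic Command Status",
    0x1: "Command Specific Status",
    0x2: "Media and Data Integrity Errors",
    0x3: "Path Related Status",
}

GENERIC_SC_NAMES = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x06: "Internal Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a readable description of an NVMe (sct, sc) status pair."""
    sct_name = SCT_NAMES.get(sct, f"Unknown SCT {sct:#x}")
    # Only the generic (sct=0) codes are tabulated in this sketch.
    sc_name = GENERIC_SC_NAMES.get(sc, f"SC {sc:#x}") if sct == 0 else f"SC {sc:#x}"
    return f"{sct_name}: {sc_name}"

print(decode_status(0, 8))
```

The same lookup applies to any `sct=.../sc=...` pair in SPDK completion logs, though non-generic status code types need their own tables.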
00:22:46.456 ======================================================== 00:22:46.456 Latency(us) 00:22:46.456 Device Information : IOPS MiB/s Average min max 00:22:46.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1891.70 81.28 67680.06 897.17 118110.43 00:22:46.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1867.08 80.23 68589.36 643.58 121660.15 00:22:46.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1861.08 79.97 68834.73 841.51 122370.11 00:22:46.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1896.63 81.50 67581.26 675.95 120822.43 00:22:46.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1891.70 81.28 67780.92 711.74 122324.13 00:22:46.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1834.10 78.81 69942.50 924.09 124870.95 00:22:46.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1868.36 80.28 68700.71 924.43 127530.85 00:22:46.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1866.86 80.22 68777.49 741.15 122111.24 00:22:46.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1868.15 80.27 68767.95 796.88 132497.86 00:22:46.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1899.20 81.61 67669.19 380.01 121107.29 00:22:46.456 ======================================================== 00:22:46.456 Total : 18744.88 805.44 68425.17 380.01 132497.86 00:22:46.456 00:22:46.456 [2024-11-20 09:08:11.719004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1620720 is same with the state(6) to be set 00:22:46.456 [2024-11-20 09:08:11.719049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e560 is same with the state(6) to be set 00:22:46.456 [2024-11-20 09:08:11.719079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x161fa70 is same with the state(6) to be set
00:22:46.456 [2024-11-20 09:08:11.719109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1620900 is same with the state(6) to be set
00:22:46.456 [2024-11-20 09:08:11.719139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e890 is same with the state(6) to be set
00:22:46.456 [2024-11-20 09:08:11.719176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ebc0 is same with the state(6) to be set
00:22:46.456 [2024-11-20 09:08:11.719208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1620ae0 is same with the state(6) to be set
00:22:46.456 [2024-11-20 09:08:11.719237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161eef0 is same with the state(6) to be set
00:22:46.456 [2024-11-20 09:08:11.719267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161f740 is same with the state(6) to be set
00:22:46.456 [2024-11-20 09:08:11.719295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161f410 is same with the state(6) to be set
00:22:46.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:46.456 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 762090
00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 762090
00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@640 -- # local arg=wait 00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 762090 00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:47.398 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:47.399 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:47.399 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:47.399 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.399 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:47.399 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.399 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:47.399 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.399 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.399 rmmod nvme_tcp 00:22:47.659 rmmod nvme_fabrics 00:22:47.659 rmmod nvme_keyring 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 761711 ']' 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 761711 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 761711 ']' 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 761711 00:22:47.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (761711) - No such process 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 761711 is not found' 00:22:47.659 Process with pid 761711 is not found 00:22:47.659 
09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.659 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.660 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.660 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.660 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.660 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.571 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.571 00:22:49.571 real 0m10.281s 00:22:49.571 user 0m28.122s 00:22:49.571 sys 0m3.870s 00:22:49.572 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.572 09:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:49.572 ************************************
00:22:49.572 END TEST nvmf_shutdown_tc4
00:22:49.572 ************************************
00:22:49.833 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:22:49.833
00:22:49.833 real 0m42.901s
00:22:49.833 user 1m43.206s
00:22:49.833 sys 0m13.615s
00:22:49.833 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:49.833 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:49.833 ************************************
00:22:49.833 END TEST nvmf_shutdown
00:22:49.833 ************************************
00:22:49.833 09:08:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:49.833 09:08:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:49.833 09:08:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:49.833 09:08:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:49.833 ************************************
00:22:49.833 START TEST nvmf_nsid
00:22:49.833 ************************************
00:22:49.833 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:49.833 * Looking for test storage...
00:22:49.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:49.833 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.833 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.833 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.094 
09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:50.094 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:50.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.095 --rc genhtml_branch_coverage=1 00:22:50.095 --rc genhtml_function_coverage=1 00:22:50.095 --rc genhtml_legend=1 00:22:50.095 --rc geninfo_all_blocks=1 00:22:50.095 --rc 
geninfo_unexecuted_blocks=1 00:22:50.095 00:22:50.095 ' 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:50.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.095 --rc genhtml_branch_coverage=1 00:22:50.095 --rc genhtml_function_coverage=1 00:22:50.095 --rc genhtml_legend=1 00:22:50.095 --rc geninfo_all_blocks=1 00:22:50.095 --rc geninfo_unexecuted_blocks=1 00:22:50.095 00:22:50.095 ' 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:50.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.095 --rc genhtml_branch_coverage=1 00:22:50.095 --rc genhtml_function_coverage=1 00:22:50.095 --rc genhtml_legend=1 00:22:50.095 --rc geninfo_all_blocks=1 00:22:50.095 --rc geninfo_unexecuted_blocks=1 00:22:50.095 00:22:50.095 ' 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:50.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.095 --rc genhtml_branch_coverage=1 00:22:50.095 --rc genhtml_function_coverage=1 00:22:50.095 --rc genhtml_legend=1 00:22:50.095 --rc geninfo_all_blocks=1 00:22:50.095 --rc geninfo_unexecuted_blocks=1 00:22:50.095 00:22:50.095 ' 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.095 09:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.095 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:58.235 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:58.235 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:58.235 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:58.235 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:58.235 09:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:58.235 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:58.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:22:58.235 00:22:58.235 --- 10.0.0.2 ping statistics --- 00:22:58.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.235 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:22:58.235 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:22:58.235 00:22:58.235 --- 10.0.0.1 ping statistics --- 00:22:58.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.236 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.236 09:08:22 
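The trace above (nvmf/common.sh@250-291) builds the loopback TCP test topology: the first detected e810 port is moved into a private network namespace as the target side, while the second stays in the root namespace as the initiator. A minimal sketch of just the interface-selection step, using the device names from this run; everything below is a reconstruction for illustration, not the script itself:

```shell
# Reconstruction (an assumption, not nvmf/common.sh verbatim) of the
# interface selection traced at nvmf/common.sh@253-259: with more than one
# detected net device, the first becomes the target-side interface (later
# moved into the cvl_0_0_ns_spdk namespace) and the second the initiator.
net_devs=(cvl_0_0 cvl_0_1)              # device names found in this run
TCP_INTERFACE_LIST=("${net_devs[@]}")
if (( ${#TCP_INTERFACE_LIST[@]} > 1 )); then
  NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}
  NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}
fi
echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE"
```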
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=767438 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 767438 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 767438 ']' 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.236 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:58.236 [2024-11-20 09:08:22.998364] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:22:58.236 [2024-11-20 09:08:22.998433] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.236 [2024-11-20 09:08:23.096710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.236 [2024-11-20 09:08:23.149276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.236 [2024-11-20 09:08:23.149328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.236 [2024-11-20 09:08:23.149337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.236 [2024-11-20 09:08:23.149344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.236 [2024-11-20 09:08:23.149350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.236 [2024-11-20 09:08:23.150099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=767768 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.497 
09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ce1cdd35-fef3-4a1f-8820-45e078719004 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=daa0fc67-7391-48d1-8547-81012ea3baa6 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=a665a567-2ead-46d1-90a9-458e2cab6f4a 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:58.497 null0 00:22:58.497 null1 00:22:58.497 null2 00:22:58.497 [2024-11-20 09:08:23.928846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.497 [2024-11-20 09:08:23.930826] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:22:58.497 [2024-11-20 09:08:23.930903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767768 ] 00:22:58.497 [2024-11-20 09:08:23.953139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 767768 /var/tmp/tgt2.sock 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 767768 ']' 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:58.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.497 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:58.757 [2024-11-20 09:08:24.025840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.757 [2024-11-20 09:08:24.078845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.017 09:08:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.017 09:08:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:59.017 09:08:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:59.277 [2024-11-20 09:08:24.647704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.277 [2024-11-20 09:08:24.663880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:59.277 nvme0n1 nvme0n2 00:22:59.277 nvme1n1 00:22:59.277 09:08:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:59.277 09:08:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:59.277 09:08:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:00.661 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ce1cdd35-fef3-4a1f-8820-45e078719004 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:02.046 09:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ce1cdd35fef34a1f882045e078719004 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CE1CDD35FEF34A1F882045E078719004 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CE1CDD35FEF34A1F882045E078719004 == \C\E\1\C\D\D\3\5\F\E\F\3\4\A\1\F\8\8\2\0\4\5\E\0\7\8\7\1\9\0\0\4 ]] 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:02.046 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid daa0fc67-7391-48d1-8547-81012ea3baa6 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:02.047 
09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=daa0fc67739148d1854781012ea3baa6 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DAA0FC67739148D1854781012EA3BAA6 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DAA0FC67739148D1854781012EA3BAA6 == \D\A\A\0\F\C\6\7\7\3\9\1\4\8\D\1\8\5\4\7\8\1\0\1\2\E\A\3\B\A\A\6 ]] 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid a665a567-2ead-46d1-90a9-458e2cab6f4a 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a665a5672ead46d190a9458e2cab6f4a 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A665A5672EAD46D190A9458E2CAB6F4A 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ A665A5672EAD46D190A9458E2CAB6F4A == \A\6\6\5\A\5\6\7\2\E\A\D\4\6\D\1\9\0\A\9\4\5\8\E\2\C\A\B\6\F\4\A ]] 00:23:02.047 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 767768 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 767768 ']' 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 767768 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 767768 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 767768' 00:23:02.306 killing process with pid 767768 00:23:02.306 09:08:27 
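The three NGUID checks traced above (target/nsid.sh@96-100) reduce to: strip the dashes from the `uuidgen` value (uuid2nguid, nvmf/common.sh@787 shows `tr -d -`) and compare it, case-insensitively, against the `nguid` that `nvme id-ns ... -o json | jq -r .nguid` reports. A hedged sketch using the third namespace's values from this run; the helper body is an assumption based only on the `tr -d -` call in the trace:

```shell
# Hypothetical re-sketch of the uuid -> NGUID comparison; only `tr -d -`
# appears in the trace, the rest is illustrative.
uuid2nguid() { tr -d - <<< "$1"; }

ns3uuid="a665a567-2ead-46d1-90a9-458e2cab6f4a"   # from uuidgen in this run
nguid="a665a5672ead46d190a9458e2cab6f4a"         # as jq reported it for nvme0n3

expected=$(uuid2nguid "$ns3uuid")
# The test uppercases both sides before the [[ ... == ... ]] comparison.
[[ ${nguid^^} == "${expected^^}" ]] && echo match
```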
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 767768 00:23:02.306 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 767768 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.567 rmmod nvme_tcp 00:23:02.567 rmmod nvme_fabrics 00:23:02.567 rmmod nvme_keyring 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 767438 ']' 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 767438 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 767438 ']' 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 767438 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.567 09:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 767438 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 767438' 00:23:02.567 killing process with pid 767438 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 767438 00:23:02.567 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 767438 00:23:02.827 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.827 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.827 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.827 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:02.827 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:02.827 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.827 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.827 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.827 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.827 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.827 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.827 09:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.737 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.737 00:23:04.737 real 0m14.972s 00:23:04.737 user 0m11.443s 00:23:04.737 sys 0m6.892s 00:23:04.737 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.737 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:04.737 ************************************ 00:23:04.737 END TEST nvmf_nsid 00:23:04.737 ************************************ 00:23:04.737 09:08:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:04.737 00:23:04.737 real 13m5.323s 00:23:04.737 user 27m17.593s 00:23:04.737 sys 3m56.664s 00:23:04.738 09:08:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.738 09:08:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:04.738 ************************************ 00:23:04.738 END TEST nvmf_target_extra 00:23:04.738 ************************************ 00:23:04.738 09:08:30 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:04.738 09:08:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:04.738 09:08:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.738 09:08:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:04.999 ************************************ 00:23:04.999 START TEST nvmf_host 00:23:04.999 ************************************ 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:04.999 * Looking for test storage... 
00:23:04.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:04.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.999 --rc genhtml_branch_coverage=1 00:23:04.999 --rc genhtml_function_coverage=1 00:23:04.999 --rc genhtml_legend=1 00:23:04.999 --rc geninfo_all_blocks=1 00:23:04.999 --rc geninfo_unexecuted_blocks=1 00:23:04.999 00:23:04.999 ' 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:04.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.999 --rc genhtml_branch_coverage=1 00:23:04.999 --rc genhtml_function_coverage=1 00:23:04.999 --rc genhtml_legend=1 00:23:04.999 --rc 
geninfo_all_blocks=1 00:23:04.999 --rc geninfo_unexecuted_blocks=1 00:23:04.999 00:23:04.999 ' 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:04.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.999 --rc genhtml_branch_coverage=1 00:23:04.999 --rc genhtml_function_coverage=1 00:23:04.999 --rc genhtml_legend=1 00:23:04.999 --rc geninfo_all_blocks=1 00:23:04.999 --rc geninfo_unexecuted_blocks=1 00:23:04.999 00:23:04.999 ' 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:04.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.999 --rc genhtml_branch_coverage=1 00:23:04.999 --rc genhtml_function_coverage=1 00:23:04.999 --rc genhtml_legend=1 00:23:04.999 --rc geninfo_all_blocks=1 00:23:04.999 --rc geninfo_unexecuted_blocks=1 00:23:04.999 00:23:04.999 ' 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.999 09:08:30 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.262 ************************************ 00:23:05.262 START TEST nvmf_multicontroller 00:23:05.262 ************************************ 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:05.262 * Looking for test storage... 
00:23:05.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:05.262 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:05.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.263 --rc genhtml_branch_coverage=1 00:23:05.263 --rc genhtml_function_coverage=1 
00:23:05.263 --rc genhtml_legend=1 00:23:05.263 --rc geninfo_all_blocks=1 00:23:05.263 --rc geninfo_unexecuted_blocks=1 00:23:05.263 00:23:05.263 ' 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:05.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.263 --rc genhtml_branch_coverage=1 00:23:05.263 --rc genhtml_function_coverage=1 00:23:05.263 --rc genhtml_legend=1 00:23:05.263 --rc geninfo_all_blocks=1 00:23:05.263 --rc geninfo_unexecuted_blocks=1 00:23:05.263 00:23:05.263 ' 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:05.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.263 --rc genhtml_branch_coverage=1 00:23:05.263 --rc genhtml_function_coverage=1 00:23:05.263 --rc genhtml_legend=1 00:23:05.263 --rc geninfo_all_blocks=1 00:23:05.263 --rc geninfo_unexecuted_blocks=1 00:23:05.263 00:23:05.263 ' 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:05.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.263 --rc genhtml_branch_coverage=1 00:23:05.263 --rc genhtml_function_coverage=1 00:23:05.263 --rc genhtml_legend=1 00:23:05.263 --rc geninfo_all_blocks=1 00:23:05.263 --rc geninfo_unexecuted_blocks=1 00:23:05.263 00:23:05.263 ' 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.263 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.525 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.526 09:08:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.526 09:08:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.786 09:08:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.786 09:08:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.786 09:08:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.786 09:08:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.786 09:08:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.786 09:08:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.786 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:13.787 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:13.787 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.787 09:08:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:13.787 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:13.787 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:13.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:23:13.787 00:23:13.787 --- 10.0.0.2 ping statistics --- 00:23:13.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.787 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:23:13.787 00:23:13.787 --- 10.0.0.1 ping statistics --- 00:23:13.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.787 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=772907 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 772907 00:23:13.787 09:08:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 772907 ']' 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.787 09:08:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.787 [2024-11-20 09:08:38.419063] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:23:13.788 [2024-11-20 09:08:38.419132] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.788 [2024-11-20 09:08:38.519672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:13.788 [2024-11-20 09:08:38.571871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.788 [2024-11-20 09:08:38.571920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:13.788 [2024-11-20 09:08:38.571929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.788 [2024-11-20 09:08:38.571936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.788 [2024-11-20 09:08:38.571943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.788 [2024-11-20 09:08:38.573844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.788 [2024-11-20 09:08:38.574004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.788 [2024-11-20 09:08:38.574005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.788 [2024-11-20 09:08:39.297024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.788 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.050 Malloc0 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.050 [2024-11-20 
09:08:39.373670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.050 [2024-11-20 09:08:39.385554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.050 Malloc1 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=772964 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 772964 /var/tmp/bdevperf.sock 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 772964 ']' 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.050 09:08:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.995 NVMe0n1 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.995 1 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:14.995 09:08:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.995 request: 00:23:14.995 { 00:23:14.995 "name": "NVMe0", 00:23:14.995 "trtype": "tcp", 00:23:14.995 "traddr": "10.0.0.2", 00:23:14.995 "adrfam": "ipv4", 00:23:14.995 "trsvcid": "4420", 00:23:14.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.995 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:14.995 "hostaddr": "10.0.0.1", 00:23:14.995 "prchk_reftag": false, 00:23:14.995 "prchk_guard": false, 00:23:14.995 "hdgst": false, 00:23:14.995 "ddgst": false, 00:23:14.995 "allow_unrecognized_csi": false, 00:23:14.995 "method": "bdev_nvme_attach_controller", 00:23:14.995 "req_id": 1 00:23:14.995 } 00:23:14.995 Got JSON-RPC error response 00:23:14.995 response: 00:23:14.995 { 00:23:14.995 "code": -114, 00:23:14.995 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:14.995 } 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:14.995 09:08:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.995 request: 00:23:14.995 { 00:23:14.995 "name": "NVMe0", 00:23:14.995 "trtype": "tcp", 00:23:14.995 "traddr": "10.0.0.2", 00:23:14.995 "adrfam": "ipv4", 00:23:14.995 "trsvcid": "4420", 00:23:14.995 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:14.995 "hostaddr": "10.0.0.1", 00:23:14.995 "prchk_reftag": false, 00:23:14.995 "prchk_guard": false, 00:23:14.995 "hdgst": false, 00:23:14.995 "ddgst": false, 00:23:14.995 "allow_unrecognized_csi": false, 00:23:14.995 "method": "bdev_nvme_attach_controller", 00:23:14.995 "req_id": 1 00:23:14.995 } 00:23:14.995 Got JSON-RPC error response 00:23:14.995 response: 00:23:14.995 { 00:23:14.995 "code": -114, 00:23:14.995 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:14.995 } 00:23:14.995 09:08:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:14.995 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:14.996 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:14.996 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:14.996 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:14.996 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:14.996 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.996 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:14.996 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.996 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:15.257 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.257 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.257 request: 00:23:15.257 { 00:23:15.257 "name": "NVMe0", 00:23:15.257 "trtype": "tcp", 00:23:15.257 "traddr": "10.0.0.2", 00:23:15.257 "adrfam": "ipv4", 00:23:15.257 "trsvcid": "4420", 00:23:15.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.258 "hostaddr": "10.0.0.1", 00:23:15.258 "prchk_reftag": false, 00:23:15.258 "prchk_guard": false, 00:23:15.258 "hdgst": false, 00:23:15.258 "ddgst": false, 00:23:15.258 "multipath": "disable", 00:23:15.258 "allow_unrecognized_csi": false, 00:23:15.258 "method": "bdev_nvme_attach_controller", 00:23:15.258 "req_id": 1 00:23:15.258 } 00:23:15.258 Got JSON-RPC error response 00:23:15.258 response: 00:23:15.258 { 00:23:15.258 "code": -114, 00:23:15.258 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:15.258 } 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.258 request: 00:23:15.258 { 00:23:15.258 "name": "NVMe0", 00:23:15.258 "trtype": "tcp", 00:23:15.258 "traddr": "10.0.0.2", 00:23:15.258 "adrfam": "ipv4", 00:23:15.258 "trsvcid": "4420", 00:23:15.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.258 "hostaddr": "10.0.0.1", 00:23:15.258 "prchk_reftag": false, 00:23:15.258 "prchk_guard": false, 00:23:15.258 "hdgst": false, 00:23:15.258 "ddgst": false, 00:23:15.258 "multipath": "failover", 00:23:15.258 "allow_unrecognized_csi": false, 00:23:15.258 "method": "bdev_nvme_attach_controller", 00:23:15.258 "req_id": 1 00:23:15.258 } 00:23:15.258 Got JSON-RPC error response 00:23:15.258 response: 00:23:15.258 { 00:23:15.258 "code": -114, 00:23:15.258 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:15.258 } 00:23:15.258 09:08:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.258 NVMe0n1 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.258 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:15.258 09:08:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:16.645 { 00:23:16.645 "results": [ 00:23:16.645 { 00:23:16.645 "job": "NVMe0n1", 00:23:16.645 "core_mask": "0x1", 00:23:16.645 "workload": "write", 00:23:16.645 "status": "finished", 00:23:16.645 "queue_depth": 128, 00:23:16.645 "io_size": 4096, 00:23:16.645 "runtime": 1.006111, 00:23:16.645 "iops": 24169.301399149797, 00:23:16.645 "mibps": 94.4113335904289, 00:23:16.645 "io_failed": 0, 00:23:16.645 "io_timeout": 0, 00:23:16.645 "avg_latency_us": 5280.038829351208, 00:23:16.646 "min_latency_us": 2116.266666666667, 00:23:16.646 "max_latency_us": 15182.506666666666 00:23:16.646 } 00:23:16.646 ], 00:23:16.646 "core_count": 1 00:23:16.646 } 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 772964 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 772964 ']' 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 772964 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772964 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772964' 00:23:16.646 killing process with pid 772964 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 772964 00:23:16.646 09:08:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 772964 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:16.646 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:16.646 [2024-11-20 09:08:39.515314] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:23:16.646 [2024-11-20 09:08:39.515387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772964 ] 00:23:16.646 [2024-11-20 09:08:39.607912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.646 [2024-11-20 09:08:39.662620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.646 [2024-11-20 09:08:40.730254] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 24294658-59cf-4c01-bbc6-70d484fdc77d already exists 00:23:16.646 [2024-11-20 09:08:40.730300] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:24294658-59cf-4c01-bbc6-70d484fdc77d alias for bdev NVMe1n1 00:23:16.646 [2024-11-20 09:08:40.730309] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:16.646 Running I/O for 1 seconds... 00:23:16.646 24141.00 IOPS, 94.30 MiB/s 00:23:16.646 Latency(us) 00:23:16.646 [2024-11-20T08:08:42.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.646 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:16.646 NVMe0n1 : 1.01 24169.30 94.41 0.00 0.00 5280.04 2116.27 15182.51 00:23:16.646 [2024-11-20T08:08:42.175Z] =================================================================================================================== 00:23:16.646 [2024-11-20T08:08:42.175Z] Total : 24169.30 94.41 0.00 0.00 5280.04 2116.27 15182.51 00:23:16.646 Received shutdown signal, test time was about 1.000000 seconds 00:23:16.646 00:23:16.646 Latency(us) 00:23:16.646 [2024-11-20T08:08:42.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.646 [2024-11-20T08:08:42.175Z] =================================================================================================================== 00:23:16.646 [2024-11-20T08:08:42.175Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:16.646 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:16.646 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:16.646 rmmod nvme_tcp 00:23:16.646 rmmod nvme_fabrics 00:23:16.646 rmmod nvme_keyring 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 772907 ']' 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 772907 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 772907 ']' 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 772907 
00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772907 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772907' 00:23:16.907 killing process with pid 772907 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 772907 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 772907 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:16.907 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:16.908 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:16.908 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:23:16.908 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.908 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.908 09:08:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.456 09:08:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:19.456 00:23:19.456 real 0m13.890s 00:23:19.456 user 0m16.457s 00:23:19.456 sys 0m6.624s 00:23:19.456 09:08:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:19.456 09:08:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.456 ************************************ 00:23:19.456 END TEST nvmf_multicontroller 00:23:19.456 ************************************ 00:23:19.456 09:08:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:19.456 09:08:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:19.456 09:08:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:19.456 09:08:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.456 ************************************ 00:23:19.456 START TEST nvmf_aer 00:23:19.456 ************************************ 00:23:19.456 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:19.456 * Looking for test storage... 
00:23:19.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:19.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.457 --rc genhtml_branch_coverage=1 00:23:19.457 --rc genhtml_function_coverage=1 00:23:19.457 --rc genhtml_legend=1 00:23:19.457 --rc geninfo_all_blocks=1 00:23:19.457 --rc geninfo_unexecuted_blocks=1 00:23:19.457 00:23:19.457 ' 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:19.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.457 --rc 
genhtml_branch_coverage=1 00:23:19.457 --rc genhtml_function_coverage=1 00:23:19.457 --rc genhtml_legend=1 00:23:19.457 --rc geninfo_all_blocks=1 00:23:19.457 --rc geninfo_unexecuted_blocks=1 00:23:19.457 00:23:19.457 ' 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:19.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.457 --rc genhtml_branch_coverage=1 00:23:19.457 --rc genhtml_function_coverage=1 00:23:19.457 --rc genhtml_legend=1 00:23:19.457 --rc geninfo_all_blocks=1 00:23:19.457 --rc geninfo_unexecuted_blocks=1 00:23:19.457 00:23:19.457 ' 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:19.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.457 --rc genhtml_branch_coverage=1 00:23:19.457 --rc genhtml_function_coverage=1 00:23:19.457 --rc genhtml_legend=1 00:23:19.457 --rc geninfo_all_blocks=1 00:23:19.457 --rc geninfo_unexecuted_blocks=1 00:23:19.457 00:23:19.457 ' 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.457 09:08:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.457 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:19.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:19.458 09:08:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:27.608 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:27.609 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:27.609 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.609 09:08:51 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:27.609 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:27.609 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.609 09:08:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:27.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:27.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:23:27.609 00:23:27.609 --- 10.0.0.2 ping statistics --- 00:23:27.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.609 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:23:27.609 00:23:27.609 --- 10.0.0.1 ping statistics --- 00:23:27.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.609 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=777697 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 777697 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 777697 ']' 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.609 09:08:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.610 [2024-11-20 09:08:52.304337] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:23:27.610 [2024-11-20 09:08:52.304402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.610 [2024-11-20 09:08:52.402610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.610 [2024-11-20 09:08:52.456566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:27.610 [2024-11-20 09:08:52.456622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.610 [2024-11-20 09:08:52.456631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.610 [2024-11-20 09:08:52.456639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.610 [2024-11-20 09:08:52.456645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.610 [2024-11-20 09:08:52.458721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.610 [2024-11-20 09:08:52.458882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.610 [2024-11-20 09:08:52.459042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.610 [2024-11-20 09:08:52.459043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.610 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.610 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:27.610 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.610 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.610 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.871 [2024-11-20 09:08:53.182941] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.871 Malloc0 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.871 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.872 [2024-11-20 09:08:53.259219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
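The RPC sequence traced above (create TCP transport, create a malloc bdev, create the subsystem, attach the namespace, add the listener) can be sketched as a standalone script. This is a hedged sketch only: it assumes `rpc.py` from an SPDK checkout is on `PATH` and that an `nvmf_tgt` process is already running with its default RPC socket; the flags and NQN are taken from the log records, and the script skips cleanly when the SPDK tooling is absent.

```shell
# Sketch of the target bring-up sequence shown in the log above.
# Assumption: an SPDK nvmf_tgt is already running and rpc.py is on PATH;
# when it is not, the sketch prints a note instead of failing.
NQN=nqn.2016-06.io.spdk:cnode1
if command -v rpc.py >/dev/null 2>&1; then
    rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, as in host/aer.sh@14
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns "$NQN" Malloc0
    rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    echo "bring-up sketch done"
else
    echo "rpc.py not found; skipping SPDK bring-up sketch"
fi
```

The `-m 2` cap on `nvmf_create_subsystem` is what makes the later `nvmf_subsystem_add_ns ... -n 2` the last namespace the subsystem can hold, which is the attribute change the AER test listens for.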
00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.872 [ 00:23:27.872 { 00:23:27.872 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:27.872 "subtype": "Discovery", 00:23:27.872 "listen_addresses": [], 00:23:27.872 "allow_any_host": true, 00:23:27.872 "hosts": [] 00:23:27.872 }, 00:23:27.872 { 00:23:27.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.872 "subtype": "NVMe", 00:23:27.872 "listen_addresses": [ 00:23:27.872 { 00:23:27.872 "trtype": "TCP", 00:23:27.872 "adrfam": "IPv4", 00:23:27.872 "traddr": "10.0.0.2", 00:23:27.872 "trsvcid": "4420" 00:23:27.872 } 00:23:27.872 ], 00:23:27.872 "allow_any_host": true, 00:23:27.872 "hosts": [], 00:23:27.872 "serial_number": "SPDK00000000000001", 00:23:27.872 "model_number": "SPDK bdev Controller", 00:23:27.872 "max_namespaces": 2, 00:23:27.872 "min_cntlid": 1, 00:23:27.872 "max_cntlid": 65519, 00:23:27.872 "namespaces": [ 00:23:27.872 { 00:23:27.872 "nsid": 1, 00:23:27.872 "bdev_name": "Malloc0", 00:23:27.872 "name": "Malloc0", 00:23:27.872 "nguid": "87D5B17B209F42438145AB63C11887C0", 00:23:27.872 "uuid": "87d5b17b-209f-4243-8145-ab63c11887c0" 00:23:27.872 } 00:23:27.872 ] 00:23:27.872 } 00:23:27.872 ] 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=777979 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:27.872 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.132 Malloc1 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.132 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.393 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.393 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:28.393 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.393 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.393 Asynchronous Event Request test 00:23:28.393 Attaching to 10.0.0.2 00:23:28.393 Attached to 10.0.0.2 00:23:28.393 Registering asynchronous event callbacks... 00:23:28.393 Starting namespace attribute notice tests for all controllers... 00:23:28.393 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:28.393 aer_cb - Changed Namespace 00:23:28.393 Cleaning up... 
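The `waitforfile` helper traced above (autotest_common.sh@1269-1280) polls for the touch file in 0.1 s steps and gives up after 200 iterations, i.e. roughly 20 s. A minimal self-contained sketch of that polling pattern, using a hypothetical `/tmp/aer_demo_file` path rather than the test's real touch file:

```shell
# Minimal sketch of the waitforfile polling pattern used by host/aer.sh:
# wait up to max_tries * 0.1 s for a sentinel file to appear, then report
# success/failure through the exit status.
waitforfile() {
    file=$1
    i=0
    max_tries=200
    while [ ! -e "$file" ] && [ "$i" -lt "$max_tries" ]; do
        i=$((i + 1))
        sleep 0.1
    done
    [ -e "$file" ]   # 0 if the file showed up in time, 1 otherwise
}

touch /tmp/aer_demo_file                          # stand-in for the aer tool's -t touch file
waitforfile /tmp/aer_demo_file && echo "file appeared"
rm -f /tmp/aer_demo_file
```

In the real test the file is created by the `aer` binary (started with `-t /tmp/aer_touch_file`) once its AER callbacks are registered, so the poll loop acts as a readiness barrier before the second namespace is added.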
00:23:28.393 [ 00:23:28.393 { 00:23:28.393 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:28.393 "subtype": "Discovery", 00:23:28.393 "listen_addresses": [], 00:23:28.393 "allow_any_host": true, 00:23:28.393 "hosts": [] 00:23:28.393 }, 00:23:28.393 { 00:23:28.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.393 "subtype": "NVMe", 00:23:28.393 "listen_addresses": [ 00:23:28.393 { 00:23:28.393 "trtype": "TCP", 00:23:28.393 "adrfam": "IPv4", 00:23:28.393 "traddr": "10.0.0.2", 00:23:28.393 "trsvcid": "4420" 00:23:28.393 } 00:23:28.393 ], 00:23:28.393 "allow_any_host": true, 00:23:28.393 "hosts": [], 00:23:28.393 "serial_number": "SPDK00000000000001", 00:23:28.393 "model_number": "SPDK bdev Controller", 00:23:28.393 "max_namespaces": 2, 00:23:28.393 "min_cntlid": 1, 00:23:28.393 "max_cntlid": 65519, 00:23:28.393 "namespaces": [ 00:23:28.393 { 00:23:28.393 "nsid": 1, 00:23:28.393 "bdev_name": "Malloc0", 00:23:28.393 "name": "Malloc0", 00:23:28.393 "nguid": "87D5B17B209F42438145AB63C11887C0", 00:23:28.393 "uuid": "87d5b17b-209f-4243-8145-ab63c11887c0" 00:23:28.393 }, 00:23:28.393 { 00:23:28.393 "nsid": 2, 00:23:28.393 "bdev_name": "Malloc1", 00:23:28.393 "name": "Malloc1", 00:23:28.393 "nguid": "EEA07D7B943C47C885C16E3CA30BAD9A", 00:23:28.393 "uuid": "eea07d7b-943c-47c8-85c1-6e3ca30bad9a" 00:23:28.393 } 00:23:28.393 ] 00:23:28.393 } 00:23:28.393 ] 00:23:28.393 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.393 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 777979 00:23:28.393 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.394 09:08:53 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.394 rmmod nvme_tcp 00:23:28.394 rmmod nvme_fabrics 00:23:28.394 rmmod nvme_keyring 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
777697 ']' 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 777697 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 777697 ']' 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 777697 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777697 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777697' 00:23:28.394 killing process with pid 777697 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 777697 00:23:28.394 09:08:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 777697 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.655 09:08:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:31.204 00:23:31.204 real 0m11.591s 00:23:31.204 user 0m8.557s 00:23:31.204 sys 0m6.177s 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.204 ************************************ 00:23:31.204 END TEST nvmf_aer 00:23:31.204 ************************************ 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.204 ************************************ 00:23:31.204 START TEST nvmf_async_init 00:23:31.204 ************************************ 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:31.204 * Looking for test storage... 
00:23:31.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.204 09:08:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:31.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.204 --rc genhtml_branch_coverage=1 00:23:31.204 --rc genhtml_function_coverage=1 00:23:31.204 --rc genhtml_legend=1 00:23:31.204 --rc geninfo_all_blocks=1 00:23:31.204 --rc geninfo_unexecuted_blocks=1 00:23:31.204 
00:23:31.204 ' 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:31.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.204 --rc genhtml_branch_coverage=1 00:23:31.204 --rc genhtml_function_coverage=1 00:23:31.204 --rc genhtml_legend=1 00:23:31.204 --rc geninfo_all_blocks=1 00:23:31.204 --rc geninfo_unexecuted_blocks=1 00:23:31.204 00:23:31.204 ' 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:31.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.204 --rc genhtml_branch_coverage=1 00:23:31.204 --rc genhtml_function_coverage=1 00:23:31.204 --rc genhtml_legend=1 00:23:31.204 --rc geninfo_all_blocks=1 00:23:31.204 --rc geninfo_unexecuted_blocks=1 00:23:31.204 00:23:31.204 ' 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:31.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.204 --rc genhtml_branch_coverage=1 00:23:31.204 --rc genhtml_function_coverage=1 00:23:31.204 --rc genhtml_legend=1 00:23:31.204 --rc geninfo_all_blocks=1 00:23:31.204 --rc geninfo_unexecuted_blocks=1 00:23:31.204 00:23:31.204 ' 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:31.204 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=76b08d896f4945cdb38a5606ecd462e6 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.205 09:08:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.351 09:09:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:39.351 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:39.351 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.351 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:39.351 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:39.352 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:39.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:23:39.352 00:23:39.352 --- 10.0.0.2 ping statistics --- 00:23:39.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.352 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:23:39.352 00:23:39.352 --- 10.0.0.1 ping statistics --- 00:23:39.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.352 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=782412 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 782412 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 782412 ']' 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.352 09:09:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.352 [2024-11-20 09:09:04.043532] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:23:39.352 [2024-11-20 09:09:04.043597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.352 [2024-11-20 09:09:04.142891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.352 [2024-11-20 09:09:04.193983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.352 [2024-11-20 09:09:04.194034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.352 [2024-11-20 09:09:04.194042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.352 [2024-11-20 09:09:04.194049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.352 [2024-11-20 09:09:04.194056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:39.352 [2024-11-20 09:09:04.194827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.352 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.352 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:39.352 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.352 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.352 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.615 [2024-11-20 09:09:04.908780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.615 null0 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 76b08d896f4945cdb38a5606ecd462e6 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.615 [2024-11-20 09:09:04.969153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.615 09:09:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.877 nvme0n1 00:23:39.877 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.877 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:39.877 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.877 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.877 [ 00:23:39.877 { 00:23:39.877 "name": "nvme0n1", 00:23:39.877 "aliases": [ 00:23:39.877 "76b08d89-6f49-45cd-b38a-5606ecd462e6" 00:23:39.877 ], 00:23:39.877 "product_name": "NVMe disk", 00:23:39.877 "block_size": 512, 00:23:39.877 "num_blocks": 2097152, 00:23:39.877 "uuid": "76b08d89-6f49-45cd-b38a-5606ecd462e6", 00:23:39.877 "numa_id": 0, 00:23:39.877 "assigned_rate_limits": { 00:23:39.877 "rw_ios_per_sec": 0, 00:23:39.877 "rw_mbytes_per_sec": 0, 00:23:39.877 "r_mbytes_per_sec": 0, 00:23:39.877 "w_mbytes_per_sec": 0 00:23:39.877 }, 00:23:39.877 "claimed": false, 00:23:39.877 "zoned": false, 00:23:39.877 "supported_io_types": { 00:23:39.877 "read": true, 00:23:39.877 "write": true, 00:23:39.877 "unmap": false, 00:23:39.877 "flush": true, 00:23:39.877 "reset": true, 00:23:39.877 "nvme_admin": true, 00:23:39.877 "nvme_io": true, 00:23:39.877 "nvme_io_md": false, 00:23:39.877 "write_zeroes": true, 00:23:39.877 "zcopy": false, 00:23:39.877 "get_zone_info": false, 00:23:39.877 "zone_management": false, 00:23:39.877 "zone_append": false, 00:23:39.877 "compare": true, 00:23:39.877 "compare_and_write": true, 00:23:39.877 "abort": true, 00:23:39.877 "seek_hole": false, 00:23:39.877 "seek_data": false, 00:23:39.877 "copy": true, 00:23:39.877 
"nvme_iov_md": false 00:23:39.877 }, 00:23:39.877 "memory_domains": [ 00:23:39.877 { 00:23:39.877 "dma_device_id": "system", 00:23:39.877 "dma_device_type": 1 00:23:39.877 } 00:23:39.877 ], 00:23:39.877 "driver_specific": { 00:23:39.877 "nvme": [ 00:23:39.877 { 00:23:39.877 "trid": { 00:23:39.877 "trtype": "TCP", 00:23:39.877 "adrfam": "IPv4", 00:23:39.877 "traddr": "10.0.0.2", 00:23:39.877 "trsvcid": "4420", 00:23:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:39.877 }, 00:23:39.877 "ctrlr_data": { 00:23:39.877 "cntlid": 1, 00:23:39.877 "vendor_id": "0x8086", 00:23:39.877 "model_number": "SPDK bdev Controller", 00:23:39.877 "serial_number": "00000000000000000000", 00:23:39.877 "firmware_revision": "25.01", 00:23:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.877 "oacs": { 00:23:39.877 "security": 0, 00:23:39.877 "format": 0, 00:23:39.877 "firmware": 0, 00:23:39.877 "ns_manage": 0 00:23:39.877 }, 00:23:39.877 "multi_ctrlr": true, 00:23:39.877 "ana_reporting": false 00:23:39.878 }, 00:23:39.878 "vs": { 00:23:39.878 "nvme_version": "1.3" 00:23:39.878 }, 00:23:39.878 "ns_data": { 00:23:39.878 "id": 1, 00:23:39.878 "can_share": true 00:23:39.878 } 00:23:39.878 } 00:23:39.878 ], 00:23:39.878 "mp_policy": "active_passive" 00:23:39.878 } 00:23:39.878 } 00:23:39.878 ] 00:23:39.878 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.878 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:39.878 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.878 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.878 [2024-11-20 09:09:05.245626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:39.878 [2024-11-20 09:09:05.245711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1100ce0 (9): Bad file descriptor 00:23:39.878 [2024-11-20 09:09:05.377270] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:39.878 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.878 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:39.878 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.878 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.878 [ 00:23:39.878 { 00:23:39.878 "name": "nvme0n1", 00:23:39.878 "aliases": [ 00:23:39.878 "76b08d89-6f49-45cd-b38a-5606ecd462e6" 00:23:39.878 ], 00:23:39.878 "product_name": "NVMe disk", 00:23:39.878 "block_size": 512, 00:23:39.878 "num_blocks": 2097152, 00:23:39.878 "uuid": "76b08d89-6f49-45cd-b38a-5606ecd462e6", 00:23:39.878 "numa_id": 0, 00:23:39.878 "assigned_rate_limits": { 00:23:39.878 "rw_ios_per_sec": 0, 00:23:39.878 "rw_mbytes_per_sec": 0, 00:23:39.878 "r_mbytes_per_sec": 0, 00:23:39.878 "w_mbytes_per_sec": 0 00:23:39.878 }, 00:23:39.878 "claimed": false, 00:23:39.878 "zoned": false, 00:23:39.878 "supported_io_types": { 00:23:39.878 "read": true, 00:23:39.878 "write": true, 00:23:39.878 "unmap": false, 00:23:39.878 "flush": true, 00:23:39.878 "reset": true, 00:23:39.878 "nvme_admin": true, 00:23:39.878 "nvme_io": true, 00:23:39.878 "nvme_io_md": false, 00:23:39.878 "write_zeroes": true, 00:23:39.878 "zcopy": false, 00:23:39.878 "get_zone_info": false, 00:23:39.878 "zone_management": false, 00:23:39.878 "zone_append": false, 00:23:39.878 "compare": true, 00:23:39.878 "compare_and_write": true, 00:23:39.878 "abort": true, 00:23:39.878 "seek_hole": false, 00:23:39.878 "seek_data": false, 00:23:39.878 "copy": true, 00:23:39.878 "nvme_iov_md": false 00:23:39.878 }, 00:23:39.878 "memory_domains": [ 
00:23:39.878 { 00:23:39.878 "dma_device_id": "system", 00:23:39.878 "dma_device_type": 1 00:23:39.878 } 00:23:39.878 ], 00:23:39.878 "driver_specific": { 00:23:39.878 "nvme": [ 00:23:39.878 { 00:23:39.878 "trid": { 00:23:39.878 "trtype": "TCP", 00:23:39.878 "adrfam": "IPv4", 00:23:39.878 "traddr": "10.0.0.2", 00:23:39.878 "trsvcid": "4420", 00:23:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:39.878 }, 00:23:39.878 "ctrlr_data": { 00:23:39.878 "cntlid": 2, 00:23:39.878 "vendor_id": "0x8086", 00:23:39.878 "model_number": "SPDK bdev Controller", 00:23:39.878 "serial_number": "00000000000000000000", 00:23:39.878 "firmware_revision": "25.01", 00:23:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.878 "oacs": { 00:23:39.878 "security": 0, 00:23:39.878 "format": 0, 00:23:39.878 "firmware": 0, 00:23:39.878 "ns_manage": 0 00:23:39.878 }, 00:23:39.878 "multi_ctrlr": true, 00:23:39.878 "ana_reporting": false 00:23:39.878 }, 00:23:39.878 "vs": { 00:23:39.878 "nvme_version": "1.3" 00:23:39.878 }, 00:23:39.878 "ns_data": { 00:23:39.878 "id": 1, 00:23:39.878 "can_share": true 00:23:39.878 } 00:23:39.878 } 00:23:39.878 ], 00:23:39.878 "mp_policy": "active_passive" 00:23:39.878 } 00:23:39.878 } 00:23:39.878 ] 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.w4LzNjvAIB 
00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.w4LzNjvAIB 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.w4LzNjvAIB 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.140 [2024-11-20 09:09:05.466314] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.140 [2024-11-20 09:09:05.466475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.140 [2024-11-20 09:09:05.490394] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.140 nvme0n1 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.140 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.140 [ 00:23:40.140 { 00:23:40.141 "name": "nvme0n1", 00:23:40.141 "aliases": [ 00:23:40.141 "76b08d89-6f49-45cd-b38a-5606ecd462e6" 00:23:40.141 ], 00:23:40.141 "product_name": "NVMe disk", 00:23:40.141 "block_size": 512, 00:23:40.141 "num_blocks": 2097152, 00:23:40.141 "uuid": "76b08d89-6f49-45cd-b38a-5606ecd462e6", 00:23:40.141 "numa_id": 0, 00:23:40.141 "assigned_rate_limits": { 00:23:40.141 "rw_ios_per_sec": 0, 00:23:40.141 
"rw_mbytes_per_sec": 0, 00:23:40.141 "r_mbytes_per_sec": 0, 00:23:40.141 "w_mbytes_per_sec": 0 00:23:40.141 }, 00:23:40.141 "claimed": false, 00:23:40.141 "zoned": false, 00:23:40.141 "supported_io_types": { 00:23:40.141 "read": true, 00:23:40.141 "write": true, 00:23:40.141 "unmap": false, 00:23:40.141 "flush": true, 00:23:40.141 "reset": true, 00:23:40.141 "nvme_admin": true, 00:23:40.141 "nvme_io": true, 00:23:40.141 "nvme_io_md": false, 00:23:40.141 "write_zeroes": true, 00:23:40.141 "zcopy": false, 00:23:40.141 "get_zone_info": false, 00:23:40.141 "zone_management": false, 00:23:40.141 "zone_append": false, 00:23:40.141 "compare": true, 00:23:40.141 "compare_and_write": true, 00:23:40.141 "abort": true, 00:23:40.141 "seek_hole": false, 00:23:40.141 "seek_data": false, 00:23:40.141 "copy": true, 00:23:40.141 "nvme_iov_md": false 00:23:40.141 }, 00:23:40.141 "memory_domains": [ 00:23:40.141 { 00:23:40.141 "dma_device_id": "system", 00:23:40.141 "dma_device_type": 1 00:23:40.141 } 00:23:40.141 ], 00:23:40.141 "driver_specific": { 00:23:40.141 "nvme": [ 00:23:40.141 { 00:23:40.141 "trid": { 00:23:40.141 "trtype": "TCP", 00:23:40.141 "adrfam": "IPv4", 00:23:40.141 "traddr": "10.0.0.2", 00:23:40.141 "trsvcid": "4421", 00:23:40.141 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.141 }, 00:23:40.141 "ctrlr_data": { 00:23:40.141 "cntlid": 3, 00:23:40.141 "vendor_id": "0x8086", 00:23:40.141 "model_number": "SPDK bdev Controller", 00:23:40.141 "serial_number": "00000000000000000000", 00:23:40.141 "firmware_revision": "25.01", 00:23:40.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.141 "oacs": { 00:23:40.141 "security": 0, 00:23:40.141 "format": 0, 00:23:40.141 "firmware": 0, 00:23:40.141 "ns_manage": 0 00:23:40.141 }, 00:23:40.141 "multi_ctrlr": true, 00:23:40.141 "ana_reporting": false 00:23:40.141 }, 00:23:40.141 "vs": { 00:23:40.141 "nvme_version": "1.3" 00:23:40.141 }, 00:23:40.141 "ns_data": { 00:23:40.141 "id": 1, 00:23:40.141 "can_share": true 00:23:40.141 } 
00:23:40.141 } 00:23:40.141 ], 00:23:40.141 "mp_policy": "active_passive" 00:23:40.141 } 00:23:40.141 } 00:23:40.141 ] 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.w4LzNjvAIB 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.141 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.141 rmmod nvme_tcp 00:23:40.141 rmmod nvme_fabrics 00:23:40.141 rmmod nvme_keyring 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:40.403 09:09:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 782412 ']' 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 782412 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 782412 ']' 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 782412 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 782412 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 782412' 00:23:40.403 killing process with pid 782412 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 782412 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 782412 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:40.403 09:09:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.403 09:09:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.956 09:09:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.956 00:23:42.956 real 0m11.760s 00:23:42.956 user 0m4.196s 00:23:42.956 sys 0m6.145s 00:23:42.956 09:09:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.956 09:09:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.956 ************************************ 00:23:42.956 END TEST nvmf_async_init 00:23:42.956 ************************************ 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.956 ************************************ 00:23:42.956 START TEST dma 00:23:42.956 ************************************ 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:42.956 * 
Looking for test storage... 00:23:42.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.956 --rc genhtml_branch_coverage=1 00:23:42.956 --rc genhtml_function_coverage=1 00:23:42.956 --rc genhtml_legend=1 00:23:42.956 --rc geninfo_all_blocks=1 00:23:42.956 --rc geninfo_unexecuted_blocks=1 00:23:42.956 00:23:42.956 ' 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.956 --rc genhtml_branch_coverage=1 00:23:42.956 --rc genhtml_function_coverage=1 
00:23:42.956 --rc genhtml_legend=1 00:23:42.956 --rc geninfo_all_blocks=1 00:23:42.956 --rc geninfo_unexecuted_blocks=1 00:23:42.956 00:23:42.956 ' 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.956 --rc genhtml_branch_coverage=1 00:23:42.956 --rc genhtml_function_coverage=1 00:23:42.956 --rc genhtml_legend=1 00:23:42.956 --rc geninfo_all_blocks=1 00:23:42.956 --rc geninfo_unexecuted_blocks=1 00:23:42.956 00:23:42.956 ' 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.956 --rc genhtml_branch_coverage=1 00:23:42.956 --rc genhtml_function_coverage=1 00:23:42.956 --rc genhtml_legend=1 00:23:42.956 --rc geninfo_all_blocks=1 00:23:42.956 --rc geninfo_unexecuted_blocks=1 00:23:42.956 00:23:42.956 ' 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.956 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:42.957 
09:09:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:42.957 00:23:42.957 real 0m0.237s 00:23:42.957 user 0m0.145s 00:23:42.957 sys 0m0.107s 00:23:42.957 09:09:08 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:42.957 ************************************ 00:23:42.957 END TEST dma 00:23:42.957 ************************************ 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.957 ************************************ 00:23:42.957 START TEST nvmf_identify 00:23:42.957 ************************************ 00:23:42.957 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:42.957 * Looking for test storage... 
00:23:43.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:43.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.219 --rc genhtml_branch_coverage=1 00:23:43.219 --rc genhtml_function_coverage=1 00:23:43.219 --rc genhtml_legend=1 00:23:43.219 --rc geninfo_all_blocks=1 00:23:43.219 --rc geninfo_unexecuted_blocks=1 00:23:43.219 00:23:43.219 ' 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:23:43.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.219 --rc genhtml_branch_coverage=1 00:23:43.219 --rc genhtml_function_coverage=1 00:23:43.219 --rc genhtml_legend=1 00:23:43.219 --rc geninfo_all_blocks=1 00:23:43.219 --rc geninfo_unexecuted_blocks=1 00:23:43.219 00:23:43.219 ' 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:43.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.219 --rc genhtml_branch_coverage=1 00:23:43.219 --rc genhtml_function_coverage=1 00:23:43.219 --rc genhtml_legend=1 00:23:43.219 --rc geninfo_all_blocks=1 00:23:43.219 --rc geninfo_unexecuted_blocks=1 00:23:43.219 00:23:43.219 ' 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:43.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.219 --rc genhtml_branch_coverage=1 00:23:43.219 --rc genhtml_function_coverage=1 00:23:43.219 --rc genhtml_legend=1 00:23:43.219 --rc geninfo_all_blocks=1 00:23:43.219 --rc geninfo_unexecuted_blocks=1 00:23:43.219 00:23:43.219 ' 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.219 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:43.220 09:09:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.363 09:09:15 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:51.363 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.363 
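The `gather_supported_nvmf_pci_devs` steps above build per-family arrays (`e810`, `x722`, `mlx`) keyed by `vendor:device` IDs and then match discovered PCI functions against them (e.g. "Found 0000:4b:00.0 (0x8086 - 0x159b)"). A simplified stand-in for that matching, using only the IDs visible in the trace (the function itself is a sketch, not the SPDK implementation):

```shell
# Sketch of the vendor:device matching idea from gather_supported_nvmf_pci_devs.
# The ID list mirrors the pci_bus_cache keys in the trace; the lookup helper
# is a simplified, hypothetical stand-in.
intel=0x8086 mellanox=0x15b3

supported_ids=(
    "$intel:0x1592" "$intel:0x159b"               # e810
    "$intel:0x37d2"                               # x722
    "$mellanox:0xa2dc" "$mellanox:0x1021"         # mlx
    "$mellanox:0xa2d6" "$mellanox:0x101d"
    "$mellanox:0x101b" "$mellanox:0x1017"
    "$mellanox:0x1019" "$mellanox:0x1015"
    "$mellanox:0x1013"
)

is_supported_nic() {    # usage: is_supported_nic <vendor> <device>
    local id
    for id in "${supported_ids[@]}"; do
        [ "$id" = "$1:$2" ] && return 0
    done
    return 1
}
```

With this table, the E810 devices found in the trace (`0x8086`/`0x159b`) match, while an unknown ID does not.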
09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:51.363 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:51.363 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:51.363 09:09:15 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:51.363 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
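The netdev discovery above (common.sh@411 and @427) resolves each PCI function to its interface name with a sysfs glob followed by a prefix strip. The same two lines can be exercised unprivileged by pointing them at a temporary directory standing in for `/sys/bus/pci/devices` (the temp-dir scaffolding is the only thing added here):

```shell
# Mimic /sys/bus/pci/devices with a temp dir so the glob + prefix-strip
# pattern from common.sh can run without root or real hardware.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:4b:00.0/net/cvl_0_0" "$sysfs/0000:4b:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)          # glob, as in common.sh@411
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip path, as in common.sh@427
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

This reproduces the "Found net devices under 0000:4b:00.0: cvl_0_0" lines seen in the trace.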
00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.363 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.364 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:51.364 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.364 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.364 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.364 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.364 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.364 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.364 09:09:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:51.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:23:51.364 00:23:51.364 --- 10.0.0.2 ping statistics --- 00:23:51.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.364 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:23:51.364 00:23:51.364 --- 10.0.0.1 ping statistics --- 00:23:51.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.364 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
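The `ipts` call at common.sh@287 expands (at @790) to a plain `iptables` invocation with a comment embedding the original rule text, which lets cleanup code later find and delete SPDK-added rules. A plausible reconstruction of the wrapper (the function body is a sketch; only its observable behaviour, the `SPDK_NVMF:<rule>` comment, is taken from the trace):

```shell
# Hypothetical reconstruction of the ipts wrapper: forward the rule to
# iptables and tag it with a comment recording the full argument list, e.g.
#   SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

The comment makes the rule self-describing, so a teardown path can grep `iptables-save` output for `SPDK_NVMF:` and replay each rule with `-I`/`-A` swapped for `-D`.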
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=787468 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 787468 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 787468 ']' 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
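`waitforlisten` above blocks until the freshly launched `nvmf_tgt` (pid 787468) is accepting RPCs on `/var/tmp/spdk.sock`, bounded by `max_retries=100`. The loop shape is a generic poll-with-retry; a sketch of that shape (the real helper polls for the UNIX socket, this version takes an arbitrary condition):

```shell
# Generic retry loop in the spirit of waitforlisten: poll a condition
# until it succeeds or max_retries attempts are exhausted.
wait_for() {
    local max_retries=$1; shift
    local i
    for ((i = 0; i < max_retries; i++)); do
        "$@" && return 0      # condition met
        sleep 0.1             # brief backoff between attempts
    done
    return 1                  # gave up
}
```

In the trace the condition would be something like `[ -S /var/tmp/spdk.sock ]` plus an RPC liveness check.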
00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.364 09:09:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.364 [2024-11-20 09:09:16.227459] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:23:51.364 [2024-11-20 09:09:16.227532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.364 [2024-11-20 09:09:16.328369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.364 [2024-11-20 09:09:16.383611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.364 [2024-11-20 09:09:16.383663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.364 [2024-11-20 09:09:16.383672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.364 [2024-11-20 09:09:16.383680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.364 [2024-11-20 09:09:16.383686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:51.364 [2024-11-20 09:09:16.386097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.364 [2024-11-20 09:09:16.386264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.364 [2024-11-20 09:09:16.386557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.364 [2024-11-20 09:09:16.386560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.625 [2024-11-20 09:09:17.054883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.625 Malloc0 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.625 09:09:17 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.625 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.889 [2024-11-20 09:09:17.178891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.889 09:09:17 
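The `rpc_cmd` calls in identify.sh@24-35 configure the target step by step: create the TCP transport, back a namespace with a malloc bdev, create the subsystem, attach the namespace, and add listeners. Consolidated as direct `scripts/rpc.py` invocations they would look roughly like the list below (the `rpc.py` path is an assumption; the trace itself goes through the `rpc_cmd` helper):

```shell
# Sketch: the RPC sequence from the trace, as scripts/rpc.py commands.
# Each string mirrors an rpc_cmd call visible in identify.sh@24-35.
rpc=scripts/rpc.py
setup_cmds=(
    "$rpc nvmf_create_transport -t tcp -o -u 8192"
    "$rpc bdev_malloc_create 64 512 -b Malloc0"
    "$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
    "$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789"
    "$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
```

Running these in order against a live `nvmf_tgt` would produce the subsystem layout shown in the `nvmf_get_subsystems` JSON that follows.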
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.889 [ 00:23:51.889 { 00:23:51.889 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:51.889 "subtype": "Discovery", 00:23:51.889 "listen_addresses": [ 00:23:51.889 { 00:23:51.889 "trtype": "TCP", 00:23:51.889 "adrfam": "IPv4", 00:23:51.889 "traddr": "10.0.0.2", 00:23:51.889 "trsvcid": "4420" 00:23:51.889 } 00:23:51.889 ], 00:23:51.889 "allow_any_host": true, 00:23:51.889 "hosts": [] 00:23:51.889 }, 00:23:51.889 { 00:23:51.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.889 "subtype": "NVMe", 00:23:51.889 "listen_addresses": [ 00:23:51.889 { 00:23:51.889 "trtype": "TCP", 00:23:51.889 "adrfam": "IPv4", 00:23:51.889 "traddr": "10.0.0.2", 00:23:51.889 "trsvcid": "4420" 00:23:51.889 } 00:23:51.889 ], 00:23:51.889 "allow_any_host": true, 00:23:51.889 "hosts": [], 00:23:51.889 "serial_number": "SPDK00000000000001", 00:23:51.889 "model_number": "SPDK bdev Controller", 00:23:51.889 "max_namespaces": 32, 00:23:51.889 "min_cntlid": 1, 00:23:51.889 "max_cntlid": 65519, 00:23:51.889 "namespaces": [ 00:23:51.889 { 00:23:51.889 "nsid": 1, 00:23:51.889 "bdev_name": "Malloc0", 00:23:51.889 "name": "Malloc0", 00:23:51.889 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:51.889 "eui64": "ABCDEF0123456789", 00:23:51.889 "uuid": "97b24774-06fd-41d7-bc13-9f0ba6737e81" 00:23:51.889 } 00:23:51.889 ] 00:23:51.889 } 00:23:51.889 ] 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.889 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:51.889 [2024-11-20 09:09:17.244599] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:23:51.889 [2024-11-20 09:09:17.244676] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787641 ] 00:23:51.889 [2024-11-20 09:09:17.301876] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:51.889 [2024-11-20 09:09:17.301953] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:51.889 [2024-11-20 09:09:17.301960] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:51.889 [2024-11-20 09:09:17.301974] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:51.889 [2024-11-20 09:09:17.301988] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:51.889 [2024-11-20 09:09:17.302870] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:51.889 [2024-11-20 09:09:17.302917] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x149d690 0 00:23:51.889 [2024-11-20 09:09:17.313182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:51.889 [2024-11-20 09:09:17.313200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:51.889 [2024-11-20 09:09:17.313205] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:51.889 [2024-11-20 09:09:17.313208] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:51.889 [2024-11-20 09:09:17.313251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.889 [2024-11-20 09:09:17.313257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.889 [2024-11-20 09:09:17.313262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x149d690) 00:23:51.889 [2024-11-20 09:09:17.313279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:51.889 [2024-11-20 09:09:17.313304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff100, cid 0, qid 0 00:23:51.889 [2024-11-20 09:09:17.321173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.889 [2024-11-20 09:09:17.321184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.889 [2024-11-20 09:09:17.321188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.889 [2024-11-20 09:09:17.321193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff100) on tqpair=0x149d690 00:23:51.889 [2024-11-20 09:09:17.321203] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:51.889 [2024-11-20 09:09:17.321212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:51.889 [2024-11-20 09:09:17.321217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:51.889 [2024-11-20 09:09:17.321233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.321237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.321240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x149d690) 
00:23:51.890 [2024-11-20 09:09:17.321249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.890 [2024-11-20 09:09:17.321265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff100, cid 0, qid 0 00:23:51.890 [2024-11-20 09:09:17.321460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.890 [2024-11-20 09:09:17.321467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.890 [2024-11-20 09:09:17.321470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.321474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff100) on tqpair=0x149d690 00:23:51.890 [2024-11-20 09:09:17.321480] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:51.890 [2024-11-20 09:09:17.321488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:51.890 [2024-11-20 09:09:17.321495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.321499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.321503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x149d690) 00:23:51.890 [2024-11-20 09:09:17.321510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.890 [2024-11-20 09:09:17.321520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff100, cid 0, qid 0 00:23:51.890 [2024-11-20 09:09:17.321735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.890 [2024-11-20 09:09:17.321741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:51.890 [2024-11-20 09:09:17.321745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.321749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff100) on tqpair=0x149d690 00:23:51.890 [2024-11-20 09:09:17.321759] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:51.890 [2024-11-20 09:09:17.321769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:51.890 [2024-11-20 09:09:17.321776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.321779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.321783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x149d690) 00:23:51.890 [2024-11-20 09:09:17.321790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.890 [2024-11-20 09:09:17.321801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff100, cid 0, qid 0 00:23:51.890 [2024-11-20 09:09:17.321989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.890 [2024-11-20 09:09:17.321995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.890 [2024-11-20 09:09:17.321999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.322002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff100) on tqpair=0x149d690 00:23:51.890 [2024-11-20 09:09:17.322008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:51.890 [2024-11-20 09:09:17.322017] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.322021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.322025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x149d690) 00:23:51.890 [2024-11-20 09:09:17.322032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.890 [2024-11-20 09:09:17.322042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff100, cid 0, qid 0 00:23:51.890 [2024-11-20 09:09:17.322238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.890 [2024-11-20 09:09:17.322245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.890 [2024-11-20 09:09:17.322249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.322252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff100) on tqpair=0x149d690 00:23:51.890 [2024-11-20 09:09:17.322257] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:51.890 [2024-11-20 09:09:17.322262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:51.890 [2024-11-20 09:09:17.322270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:51.890 [2024-11-20 09:09:17.322382] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:51.890 [2024-11-20 09:09:17.322387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:51.890 [2024-11-20 09:09:17.322396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.322400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.322403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x149d690) 00:23:51.890 [2024-11-20 09:09:17.322410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.890 [2024-11-20 09:09:17.322421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff100, cid 0, qid 0 00:23:51.890 [2024-11-20 09:09:17.322619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.890 [2024-11-20 09:09:17.322628] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.890 [2024-11-20 09:09:17.322632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.322636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff100) on tqpair=0x149d690 00:23:51.890 [2024-11-20 09:09:17.322640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:51.890 [2024-11-20 09:09:17.322650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.322654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.322658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x149d690) 00:23:51.890 [2024-11-20 09:09:17.322664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.890 [2024-11-20 09:09:17.322675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff100, cid 0, qid 0 00:23:51.890 [2024-11-20 
09:09:17.322847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.890 [2024-11-20 09:09:17.322853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.890 [2024-11-20 09:09:17.322856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.322860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff100) on tqpair=0x149d690 00:23:51.890 [2024-11-20 09:09:17.322865] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:51.890 [2024-11-20 09:09:17.322870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:51.890 [2024-11-20 09:09:17.322878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:51.890 [2024-11-20 09:09:17.322887] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:51.890 [2024-11-20 09:09:17.322896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.322900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x149d690) 00:23:51.890 [2024-11-20 09:09:17.322907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.890 [2024-11-20 09:09:17.322918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff100, cid 0, qid 0 00:23:51.890 [2024-11-20 09:09:17.323136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.890 [2024-11-20 09:09:17.323143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:23:51.890 [2024-11-20 09:09:17.323147] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.323151] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x149d690): datao=0, datal=4096, cccid=0 00:23:51.890 [2024-11-20 09:09:17.323156] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14ff100) on tqpair(0x149d690): expected_datao=0, payload_size=4096 00:23:51.890 [2024-11-20 09:09:17.323168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.323176] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.323181] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.323308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.890 [2024-11-20 09:09:17.323314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.890 [2024-11-20 09:09:17.323318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.323322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff100) on tqpair=0x149d690 00:23:51.890 [2024-11-20 09:09:17.323333] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:51.890 [2024-11-20 09:09:17.323338] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:51.890 [2024-11-20 09:09:17.323343] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:51.890 [2024-11-20 09:09:17.323352] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:51.890 [2024-11-20 09:09:17.323357] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:51.890 [2024-11-20 09:09:17.323362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:51.890 [2024-11-20 09:09:17.323373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:51.890 [2024-11-20 09:09:17.323381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.323385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.890 [2024-11-20 09:09:17.323388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x149d690) 00:23:51.890 [2024-11-20 09:09:17.323396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.890 [2024-11-20 09:09:17.323407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff100, cid 0, qid 0 00:23:51.890 [2024-11-20 09:09:17.323588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.891 [2024-11-20 09:09:17.323594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.891 [2024-11-20 09:09:17.323598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.323602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff100) on tqpair=0x149d690 00:23:51.891 [2024-11-20 09:09:17.323609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.323613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.323617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x149d690) 00:23:51.891 [2024-11-20 09:09:17.323623] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.891 [2024-11-20 09:09:17.323630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.323633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.323637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x149d690) 00:23:51.891 [2024-11-20 09:09:17.323643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.891 [2024-11-20 09:09:17.323649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.323653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.323657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x149d690) 00:23:51.891 [2024-11-20 09:09:17.323662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.891 [2024-11-20 09:09:17.323668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.323672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.323676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.891 [2024-11-20 09:09:17.323681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.891 [2024-11-20 09:09:17.323686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:51.891 [2024-11-20 09:09:17.323697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:51.891 [2024-11-20 09:09:17.323704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.323708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x149d690) 00:23:51.891 [2024-11-20 09:09:17.323715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.891 [2024-11-20 09:09:17.323727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff100, cid 0, qid 0 00:23:51.891 [2024-11-20 09:09:17.323732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff280, cid 1, qid 0 00:23:51.891 [2024-11-20 09:09:17.323737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff400, cid 2, qid 0 00:23:51.891 [2024-11-20 09:09:17.323742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.891 [2024-11-20 09:09:17.323746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff700, cid 4, qid 0 00:23:51.891 [2024-11-20 09:09:17.323971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.891 [2024-11-20 09:09:17.323977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.891 [2024-11-20 09:09:17.323981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.323985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff700) on tqpair=0x149d690 00:23:51.891 [2024-11-20 09:09:17.323993] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:51.891 [2024-11-20 09:09:17.323998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:23:51.891 [2024-11-20 09:09:17.324009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x149d690) 00:23:51.891 [2024-11-20 09:09:17.324019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.891 [2024-11-20 09:09:17.324031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff700, cid 4, qid 0 00:23:51.891 [2024-11-20 09:09:17.324234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.891 [2024-11-20 09:09:17.324241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.891 [2024-11-20 09:09:17.324245] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324248] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x149d690): datao=0, datal=4096, cccid=4 00:23:51.891 [2024-11-20 09:09:17.324253] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14ff700) on tqpair(0x149d690): expected_datao=0, payload_size=4096 00:23:51.891 [2024-11-20 09:09:17.324258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324264] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324268] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.891 [2024-11-20 09:09:17.324428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.891 [2024-11-20 09:09:17.324431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x14ff700) on tqpair=0x149d690 00:23:51.891 [2024-11-20 09:09:17.324448] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:51.891 [2024-11-20 09:09:17.324475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x149d690) 00:23:51.891 [2024-11-20 09:09:17.324489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.891 [2024-11-20 09:09:17.324497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x149d690) 00:23:51.891 [2024-11-20 09:09:17.324510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.891 [2024-11-20 09:09:17.324525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff700, cid 4, qid 0 00:23:51.891 [2024-11-20 09:09:17.324530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff880, cid 5, qid 0 00:23:51.891 [2024-11-20 09:09:17.324777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.891 [2024-11-20 09:09:17.324784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.891 [2024-11-20 09:09:17.324787] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324791] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x149d690): datao=0, datal=1024, cccid=4 00:23:51.891 [2024-11-20 09:09:17.324796] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14ff700) on tqpair(0x149d690): expected_datao=0, payload_size=1024 00:23:51.891 [2024-11-20 09:09:17.324800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324807] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324810] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.891 [2024-11-20 09:09:17.324822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.891 [2024-11-20 09:09:17.324825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.324829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff880) on tqpair=0x149d690 00:23:51.891 [2024-11-20 09:09:17.366169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.891 [2024-11-20 09:09:17.366181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.891 [2024-11-20 09:09:17.366186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.366190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff700) on tqpair=0x149d690 00:23:51.891 [2024-11-20 09:09:17.366205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.366209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x149d690) 00:23:51.891 [2024-11-20 09:09:17.366217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.891 [2024-11-20 09:09:17.366235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff700, cid 4, qid 0 00:23:51.891 [2024-11-20 09:09:17.366424] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.891 [2024-11-20 09:09:17.366431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.891 [2024-11-20 09:09:17.366435] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.366439] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x149d690): datao=0, datal=3072, cccid=4 00:23:51.891 [2024-11-20 09:09:17.366444] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14ff700) on tqpair(0x149d690): expected_datao=0, payload_size=3072 00:23:51.891 [2024-11-20 09:09:17.366448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.366455] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.366459] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.366649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.891 [2024-11-20 09:09:17.366660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.891 [2024-11-20 09:09:17.366664] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.366668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff700) on tqpair=0x149d690 00:23:51.891 [2024-11-20 09:09:17.366677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.366681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x149d690) 00:23:51.891 [2024-11-20 09:09:17.366688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.891 [2024-11-20 09:09:17.366702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff700, cid 4, qid 0 00:23:51.891 [2024-11-20 
09:09:17.366924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.891 [2024-11-20 09:09:17.366930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.891 [2024-11-20 09:09:17.366934] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.891 [2024-11-20 09:09:17.366937] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x149d690): datao=0, datal=8, cccid=4 00:23:51.892 [2024-11-20 09:09:17.366942] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14ff700) on tqpair(0x149d690): expected_datao=0, payload_size=8 00:23:51.892 [2024-11-20 09:09:17.366946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.892 [2024-11-20 09:09:17.366953] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.892 [2024-11-20 09:09:17.366956] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.892 [2024-11-20 09:09:17.407337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.892 [2024-11-20 09:09:17.407349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.892 [2024-11-20 09:09:17.407353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.892 [2024-11-20 09:09:17.407358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff700) on tqpair=0x149d690 00:23:51.892 ===================================================== 00:23:51.892 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:51.892 ===================================================== 00:23:51.892 Controller Capabilities/Features 00:23:51.892 ================================ 00:23:51.892 Vendor ID: 0000 00:23:51.892 Subsystem Vendor ID: 0000 00:23:51.892 Serial Number: .................... 00:23:51.892 Model Number: ........................................ 
00:23:51.892 Firmware Version: 25.01 00:23:51.892 Recommended Arb Burst: 0 00:23:51.892 IEEE OUI Identifier: 00 00 00 00:23:51.892 Multi-path I/O 00:23:51.892 May have multiple subsystem ports: No 00:23:51.892 May have multiple controllers: No 00:23:51.892 Associated with SR-IOV VF: No 00:23:51.892 Max Data Transfer Size: 131072 00:23:51.892 Max Number of Namespaces: 0 00:23:51.892 Max Number of I/O Queues: 1024 00:23:51.892 NVMe Specification Version (VS): 1.3 00:23:51.892 NVMe Specification Version (Identify): 1.3 00:23:51.892 Maximum Queue Entries: 128 00:23:51.892 Contiguous Queues Required: Yes 00:23:51.892 Arbitration Mechanisms Supported 00:23:51.892 Weighted Round Robin: Not Supported 00:23:51.892 Vendor Specific: Not Supported 00:23:51.892 Reset Timeout: 15000 ms 00:23:51.892 Doorbell Stride: 4 bytes 00:23:51.892 NVM Subsystem Reset: Not Supported 00:23:51.892 Command Sets Supported 00:23:51.892 NVM Command Set: Supported 00:23:51.892 Boot Partition: Not Supported 00:23:51.892 Memory Page Size Minimum: 4096 bytes 00:23:51.892 Memory Page Size Maximum: 4096 bytes 00:23:51.892 Persistent Memory Region: Not Supported 00:23:51.892 Optional Asynchronous Events Supported 00:23:51.892 Namespace Attribute Notices: Not Supported 00:23:51.892 Firmware Activation Notices: Not Supported 00:23:51.892 ANA Change Notices: Not Supported 00:23:51.892 PLE Aggregate Log Change Notices: Not Supported 00:23:51.892 LBA Status Info Alert Notices: Not Supported 00:23:51.892 EGE Aggregate Log Change Notices: Not Supported 00:23:51.892 Normal NVM Subsystem Shutdown event: Not Supported 00:23:51.892 Zone Descriptor Change Notices: Not Supported 00:23:51.892 Discovery Log Change Notices: Supported 00:23:51.892 Controller Attributes 00:23:51.892 128-bit Host Identifier: Not Supported 00:23:51.892 Non-Operational Permissive Mode: Not Supported 00:23:51.892 NVM Sets: Not Supported 00:23:51.892 Read Recovery Levels: Not Supported 00:23:51.892 Endurance Groups: Not Supported 00:23:51.892 
Predictable Latency Mode: Not Supported 00:23:51.892 Traffic Based Keep ALive: Not Supported 00:23:51.892 Namespace Granularity: Not Supported 00:23:51.892 SQ Associations: Not Supported 00:23:51.892 UUID List: Not Supported 00:23:51.892 Multi-Domain Subsystem: Not Supported 00:23:51.892 Fixed Capacity Management: Not Supported 00:23:51.892 Variable Capacity Management: Not Supported 00:23:51.892 Delete Endurance Group: Not Supported 00:23:51.892 Delete NVM Set: Not Supported 00:23:51.892 Extended LBA Formats Supported: Not Supported 00:23:51.892 Flexible Data Placement Supported: Not Supported 00:23:51.892 00:23:51.892 Controller Memory Buffer Support 00:23:51.892 ================================ 00:23:51.892 Supported: No 00:23:51.892 00:23:51.892 Persistent Memory Region Support 00:23:51.892 ================================ 00:23:51.892 Supported: No 00:23:51.892 00:23:51.892 Admin Command Set Attributes 00:23:51.892 ============================ 00:23:51.892 Security Send/Receive: Not Supported 00:23:51.892 Format NVM: Not Supported 00:23:51.892 Firmware Activate/Download: Not Supported 00:23:51.892 Namespace Management: Not Supported 00:23:51.892 Device Self-Test: Not Supported 00:23:51.892 Directives: Not Supported 00:23:51.892 NVMe-MI: Not Supported 00:23:51.892 Virtualization Management: Not Supported 00:23:51.892 Doorbell Buffer Config: Not Supported 00:23:51.892 Get LBA Status Capability: Not Supported 00:23:51.892 Command & Feature Lockdown Capability: Not Supported 00:23:51.892 Abort Command Limit: 1 00:23:51.892 Async Event Request Limit: 4 00:23:51.892 Number of Firmware Slots: N/A 00:23:51.892 Firmware Slot 1 Read-Only: N/A 00:23:51.892 Firmware Activation Without Reset: N/A 00:23:51.892 Multiple Update Detection Support: N/A 00:23:51.892 Firmware Update Granularity: No Information Provided 00:23:51.892 Per-Namespace SMART Log: No 00:23:51.892 Asymmetric Namespace Access Log Page: Not Supported 00:23:51.892 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:51.892 Command Effects Log Page: Not Supported 00:23:51.892 Get Log Page Extended Data: Supported 00:23:51.892 Telemetry Log Pages: Not Supported 00:23:51.892 Persistent Event Log Pages: Not Supported 00:23:51.892 Supported Log Pages Log Page: May Support 00:23:51.892 Commands Supported & Effects Log Page: Not Supported 00:23:51.892 Feature Identifiers & Effects Log Page:May Support 00:23:51.892 NVMe-MI Commands & Effects Log Page: May Support 00:23:51.892 Data Area 4 for Telemetry Log: Not Supported 00:23:51.892 Error Log Page Entries Supported: 128 00:23:51.892 Keep Alive: Not Supported 00:23:51.892 00:23:51.892 NVM Command Set Attributes 00:23:51.892 ========================== 00:23:51.892 Submission Queue Entry Size 00:23:51.892 Max: 1 00:23:51.892 Min: 1 00:23:51.892 Completion Queue Entry Size 00:23:51.892 Max: 1 00:23:51.892 Min: 1 00:23:51.892 Number of Namespaces: 0 00:23:51.892 Compare Command: Not Supported 00:23:51.892 Write Uncorrectable Command: Not Supported 00:23:51.892 Dataset Management Command: Not Supported 00:23:51.892 Write Zeroes Command: Not Supported 00:23:51.892 Set Features Save Field: Not Supported 00:23:51.892 Reservations: Not Supported 00:23:51.892 Timestamp: Not Supported 00:23:51.892 Copy: Not Supported 00:23:51.892 Volatile Write Cache: Not Present 00:23:51.892 Atomic Write Unit (Normal): 1 00:23:51.892 Atomic Write Unit (PFail): 1 00:23:51.892 Atomic Compare & Write Unit: 1 00:23:51.892 Fused Compare & Write: Supported 00:23:51.892 Scatter-Gather List 00:23:51.892 SGL Command Set: Supported 00:23:51.892 SGL Keyed: Supported 00:23:51.892 SGL Bit Bucket Descriptor: Not Supported 00:23:51.892 SGL Metadata Pointer: Not Supported 00:23:51.892 Oversized SGL: Not Supported 00:23:51.892 SGL Metadata Address: Not Supported 00:23:51.892 SGL Offset: Supported 00:23:51.892 Transport SGL Data Block: Not Supported 00:23:51.892 Replay Protected Memory Block: Not Supported 00:23:51.892 00:23:51.892 
Firmware Slot Information 00:23:51.892 ========================= 00:23:51.892 Active slot: 0 00:23:51.892 00:23:51.892 00:23:51.892 Error Log 00:23:51.892 ========= 00:23:51.892 00:23:51.892 Active Namespaces 00:23:51.892 ================= 00:23:51.892 Discovery Log Page 00:23:51.892 ================== 00:23:51.892 Generation Counter: 2 00:23:51.892 Number of Records: 2 00:23:51.892 Record Format: 0 00:23:51.892 00:23:51.892 Discovery Log Entry 0 00:23:51.892 ---------------------- 00:23:51.892 Transport Type: 3 (TCP) 00:23:51.892 Address Family: 1 (IPv4) 00:23:51.892 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:51.892 Entry Flags: 00:23:51.892 Duplicate Returned Information: 1 00:23:51.892 Explicit Persistent Connection Support for Discovery: 1 00:23:51.892 Transport Requirements: 00:23:51.892 Secure Channel: Not Required 00:23:51.892 Port ID: 0 (0x0000) 00:23:51.892 Controller ID: 65535 (0xffff) 00:23:51.892 Admin Max SQ Size: 128 00:23:51.892 Transport Service Identifier: 4420 00:23:51.892 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:51.892 Transport Address: 10.0.0.2 00:23:51.892 Discovery Log Entry 1 00:23:51.892 ---------------------- 00:23:51.892 Transport Type: 3 (TCP) 00:23:51.892 Address Family: 1 (IPv4) 00:23:51.892 Subsystem Type: 2 (NVM Subsystem) 00:23:51.892 Entry Flags: 00:23:51.892 Duplicate Returned Information: 0 00:23:51.892 Explicit Persistent Connection Support for Discovery: 0 00:23:51.892 Transport Requirements: 00:23:51.892 Secure Channel: Not Required 00:23:51.893 Port ID: 0 (0x0000) 00:23:51.893 Controller ID: 65535 (0xffff) 00:23:51.893 Admin Max SQ Size: 128 00:23:51.893 Transport Service Identifier: 4420 00:23:51.893 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:51.893 Transport Address: 10.0.0.2 [2024-11-20 09:09:17.407464] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:51.893 [2024-11-20 
09:09:17.407476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff100) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.407484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.893 [2024-11-20 09:09:17.407490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff280) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.407494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.893 [2024-11-20 09:09:17.407499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff400) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.407504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.893 [2024-11-20 09:09:17.407509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.407514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.893 [2024-11-20 09:09:17.407526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.407530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.407534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.893 [2024-11-20 09:09:17.407543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.893 [2024-11-20 09:09:17.407558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.893 [2024-11-20 09:09:17.407656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.893 [2024-11-20 
09:09:17.407665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.893 [2024-11-20 09:09:17.407669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.407673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.407680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.407684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.407688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.893 [2024-11-20 09:09:17.407694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.893 [2024-11-20 09:09:17.407709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.893 [2024-11-20 09:09:17.407944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.893 [2024-11-20 09:09:17.407950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.893 [2024-11-20 09:09:17.407954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.407958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.407963] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:51.893 [2024-11-20 09:09:17.407968] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:51.893 [2024-11-20 09:09:17.407977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.407981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.893 
[2024-11-20 09:09:17.407984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.893 [2024-11-20 09:09:17.407991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.893 [2024-11-20 09:09:17.408001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.893 [2024-11-20 09:09:17.408193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.893 [2024-11-20 09:09:17.408200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.893 [2024-11-20 09:09:17.408204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.408218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.893 [2024-11-20 09:09:17.408232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.893 [2024-11-20 09:09:17.408243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.893 [2024-11-20 09:09:17.408426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.893 [2024-11-20 09:09:17.408433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.893 [2024-11-20 09:09:17.408436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on 
tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.408450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.893 [2024-11-20 09:09:17.408464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.893 [2024-11-20 09:09:17.408477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.893 [2024-11-20 09:09:17.408651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.893 [2024-11-20 09:09:17.408658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.893 [2024-11-20 09:09:17.408661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.408675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.893 [2024-11-20 09:09:17.408689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.893 [2024-11-20 09:09:17.408699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.893 [2024-11-20 09:09:17.408887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.893 [2024-11-20 09:09:17.408894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:51.893 [2024-11-20 09:09:17.408897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.408911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.408918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.893 [2024-11-20 09:09:17.408925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.893 [2024-11-20 09:09:17.408936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.893 [2024-11-20 09:09:17.409105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.893 [2024-11-20 09:09:17.409111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.893 [2024-11-20 09:09:17.409115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.409119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.409128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.409132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.409136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.893 [2024-11-20 09:09:17.409142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.893 [2024-11-20 09:09:17.409153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x14ff580, cid 3, qid 0 00:23:51.893 [2024-11-20 09:09:17.409343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.893 [2024-11-20 09:09:17.409350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.893 [2024-11-20 09:09:17.409354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.409357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.409367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.409371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.409375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.893 [2024-11-20 09:09:17.409382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.893 [2024-11-20 09:09:17.409392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.893 [2024-11-20 09:09:17.409575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.893 [2024-11-20 09:09:17.409582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.893 [2024-11-20 09:09:17.409585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.409589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.893 [2024-11-20 09:09:17.409599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.893 [2024-11-20 09:09:17.409602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.409606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.894 [2024-11-20 09:09:17.409613] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.894 [2024-11-20 09:09:17.409623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.894 [2024-11-20 09:09:17.409853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.894 [2024-11-20 09:09:17.409860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.894 [2024-11-20 09:09:17.409863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.409867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.894 [2024-11-20 09:09:17.409878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.409882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.409885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.894 [2024-11-20 09:09:17.409892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.894 [2024-11-20 09:09:17.409902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.894 [2024-11-20 09:09:17.410085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.894 [2024-11-20 09:09:17.410092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.894 [2024-11-20 09:09:17.410095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.894 [2024-11-20 09:09:17.410109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410113] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.894 [2024-11-20 09:09:17.410123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.894 [2024-11-20 09:09:17.410133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.894 [2024-11-20 09:09:17.410355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.894 [2024-11-20 09:09:17.410362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.894 [2024-11-20 09:09:17.410366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.894 [2024-11-20 09:09:17.410379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.894 [2024-11-20 09:09:17.410393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.894 [2024-11-20 09:09:17.410403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.894 [2024-11-20 09:09:17.410652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.894 [2024-11-20 09:09:17.410661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.894 [2024-11-20 09:09:17.410665] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410668] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.894 [2024-11-20 09:09:17.410679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.894 [2024-11-20 09:09:17.410693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.894 [2024-11-20 09:09:17.410704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.894 [2024-11-20 09:09:17.410913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.894 [2024-11-20 09:09:17.410919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.894 [2024-11-20 09:09:17.410922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.894 [2024-11-20 09:09:17.410937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.410945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.894 [2024-11-20 09:09:17.410951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.894 [2024-11-20 09:09:17.410961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.894 [2024-11-20 09:09:17.411150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.894 [2024-11-20 
09:09:17.411157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.894 [2024-11-20 09:09:17.411167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.894 [2024-11-20 09:09:17.411180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.894 [2024-11-20 09:09:17.411195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.894 [2024-11-20 09:09:17.411205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.894 [2024-11-20 09:09:17.411420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.894 [2024-11-20 09:09:17.411426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.894 [2024-11-20 09:09:17.411429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.894 [2024-11-20 09:09:17.411443] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411447] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.894 [2024-11-20 09:09:17.411457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.894 [2024-11-20 
09:09:17.411468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.894 [2024-11-20 09:09:17.411682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.894 [2024-11-20 09:09:17.411689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.894 [2024-11-20 09:09:17.411694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.894 [2024-11-20 09:09:17.411708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.894 [2024-11-20 09:09:17.411722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.894 [2024-11-20 09:09:17.411733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:51.894 [2024-11-20 09:09:17.411915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.894 [2024-11-20 09:09:17.411921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.894 [2024-11-20 09:09:17.411924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:51.894 [2024-11-20 09:09:17.411938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.894 [2024-11-20 09:09:17.411945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x149d690) 00:23:51.894 [2024-11-20 09:09:17.411952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.894 [2024-11-20 09:09:17.411962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:52.161 [2024-11-20 09:09:17.412136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.161 [2024-11-20 09:09:17.412146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.161 [2024-11-20 09:09:17.412151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:52.161 [2024-11-20 09:09:17.412176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:52.161 [2024-11-20 09:09:17.412195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.161 [2024-11-20 09:09:17.412208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:52.161 [2024-11-20 09:09:17.412403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.161 [2024-11-20 09:09:17.412412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.161 [2024-11-20 09:09:17.412416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:52.161 [2024-11-20 09:09:17.412431] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:52.161 [2024-11-20 09:09:17.412445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.161 [2024-11-20 09:09:17.412456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:52.161 [2024-11-20 09:09:17.412633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.161 [2024-11-20 09:09:17.412642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.161 [2024-11-20 09:09:17.412645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:52.161 [2024-11-20 09:09:17.412662] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:52.161 [2024-11-20 09:09:17.412677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.161 [2024-11-20 09:09:17.412688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:52.161 [2024-11-20 09:09:17.412906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.161 [2024-11-20 09:09:17.412912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.161 [2024-11-20 09:09:17.412916] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:52.161 [2024-11-20 09:09:17.412929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.412936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:52.161 [2024-11-20 09:09:17.412943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.161 [2024-11-20 09:09:17.412953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:52.161 [2024-11-20 09:09:17.413172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.161 [2024-11-20 09:09:17.413179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.161 [2024-11-20 09:09:17.413182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:52.161 [2024-11-20 09:09:17.413196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:52.161 [2024-11-20 09:09:17.413210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.161 [2024-11-20 09:09:17.413220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:52.161 [2024-11-20 
09:09:17.413391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.161 [2024-11-20 09:09:17.413397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.161 [2024-11-20 09:09:17.413401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:52.161 [2024-11-20 09:09:17.413414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:52.161 [2024-11-20 09:09:17.413428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.161 [2024-11-20 09:09:17.413438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:52.161 [2024-11-20 09:09:17.413649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.161 [2024-11-20 09:09:17.413655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.161 [2024-11-20 09:09:17.413659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:52.161 [2024-11-20 09:09:17.413675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:52.161 [2024-11-20 09:09:17.413689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.161 [2024-11-20 09:09:17.413700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:52.161 [2024-11-20 09:09:17.413886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.161 [2024-11-20 09:09:17.413893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.161 [2024-11-20 09:09:17.413896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:52.161 [2024-11-20 09:09:17.413910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.413917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:52.161 [2024-11-20 09:09:17.413924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.161 [2024-11-20 09:09:17.413934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:52.161 [2024-11-20 09:09:17.414138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.161 [2024-11-20 09:09:17.414144] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.161 [2024-11-20 09:09:17.414147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.414151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:52.161 [2024-11-20 09:09:17.418168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.161 [2024-11-20 09:09:17.418175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:52.161 [2024-11-20 09:09:17.418178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x149d690) 00:23:52.162 [2024-11-20 09:09:17.418185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.162 [2024-11-20 09:09:17.418197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ff580, cid 3, qid 0 00:23:52.162 [2024-11-20 09:09:17.418382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.162 [2024-11-20 09:09:17.418389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.162 [2024-11-20 09:09:17.418392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.418396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ff580) on tqpair=0x149d690 00:23:52.162 [2024-11-20 09:09:17.418404] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 10 milliseconds 00:23:52.162 00:23:52.162 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:52.162 [2024-11-20 09:09:17.470538] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:23:52.162 [2024-11-20 09:09:17.470583] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787643 ] 00:23:52.162 [2024-11-20 09:09:17.525786] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:52.162 [2024-11-20 09:09:17.525849] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:52.162 [2024-11-20 09:09:17.525856] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:52.162 [2024-11-20 09:09:17.525873] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:52.162 [2024-11-20 09:09:17.525886] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:52.162 [2024-11-20 09:09:17.529487] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:52.162 [2024-11-20 09:09:17.529527] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1466690 0 00:23:52.162 [2024-11-20 09:09:17.537178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:52.162 [2024-11-20 09:09:17.537194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:52.162 [2024-11-20 09:09:17.537199] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:52.162 [2024-11-20 09:09:17.537203] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:52.162 [2024-11-20 09:09:17.537239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.537245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.537249] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466690) 00:23:52.162 [2024-11-20 09:09:17.537263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:52.162 [2024-11-20 09:09:17.537288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8100, cid 0, qid 0 00:23:52.162 [2024-11-20 09:09:17.545174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.162 [2024-11-20 09:09:17.545187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.162 [2024-11-20 09:09:17.545191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.545196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8100) on tqpair=0x1466690 00:23:52.162 [2024-11-20 09:09:17.545208] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:52.162 [2024-11-20 09:09:17.545217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:52.162 [2024-11-20 09:09:17.545222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:52.162 [2024-11-20 09:09:17.545237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.545241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.545245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466690) 00:23:52.162 [2024-11-20 09:09:17.545254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.162 [2024-11-20 09:09:17.545270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8100, cid 0, qid 0 00:23:52.162 [2024-11-20 09:09:17.545488] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.162 [2024-11-20 09:09:17.545496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.162 [2024-11-20 09:09:17.545499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.545504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8100) on tqpair=0x1466690 00:23:52.162 [2024-11-20 09:09:17.545514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:52.162 [2024-11-20 09:09:17.545522] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:52.162 [2024-11-20 09:09:17.545529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.545538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.545542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466690) 00:23:52.162 [2024-11-20 09:09:17.545551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.162 [2024-11-20 09:09:17.545562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8100, cid 0, qid 0 00:23:52.162 [2024-11-20 09:09:17.545759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.162 [2024-11-20 09:09:17.545767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.162 [2024-11-20 09:09:17.545770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.545774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8100) on tqpair=0x1466690 00:23:52.162 [2024-11-20 09:09:17.545780] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:23:52.162 [2024-11-20 09:09:17.545788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:52.162 [2024-11-20 09:09:17.545795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.545798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.545802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466690) 00:23:52.162 [2024-11-20 09:09:17.545811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.162 [2024-11-20 09:09:17.545822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8100, cid 0, qid 0 00:23:52.162 [2024-11-20 09:09:17.546034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.162 [2024-11-20 09:09:17.546044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.162 [2024-11-20 09:09:17.546047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.546051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8100) on tqpair=0x1466690 00:23:52.162 [2024-11-20 09:09:17.546056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:52.162 [2024-11-20 09:09:17.546066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.546069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.546073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466690) 00:23:52.162 [2024-11-20 09:09:17.546080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.162 [2024-11-20 09:09:17.546090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8100, cid 0, qid 0 00:23:52.162 [2024-11-20 09:09:17.546304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.162 [2024-11-20 09:09:17.546312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.162 [2024-11-20 09:09:17.546316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.546320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8100) on tqpair=0x1466690 00:23:52.162 [2024-11-20 09:09:17.546324] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:52.162 [2024-11-20 09:09:17.546329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:52.162 [2024-11-20 09:09:17.546337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:52.162 [2024-11-20 09:09:17.546446] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:52.162 [2024-11-20 09:09:17.546451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:52.162 [2024-11-20 09:09:17.546462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.162 [2024-11-20 09:09:17.546466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.546469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466690) 00:23:52.163 [2024-11-20 09:09:17.546476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.163 [2024-11-20 09:09:17.546487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8100, cid 0, qid 0 00:23:52.163 [2024-11-20 09:09:17.546703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.163 [2024-11-20 09:09:17.546710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.163 [2024-11-20 09:09:17.546714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.546717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8100) on tqpair=0x1466690 00:23:52.163 [2024-11-20 09:09:17.546722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:52.163 [2024-11-20 09:09:17.546732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.546736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.546739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466690) 00:23:52.163 [2024-11-20 09:09:17.546746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.163 [2024-11-20 09:09:17.546756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8100, cid 0, qid 0 00:23:52.163 [2024-11-20 09:09:17.546936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.163 [2024-11-20 09:09:17.546942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.163 [2024-11-20 09:09:17.546946] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.546950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8100) on tqpair=0x1466690 00:23:52.163 [2024-11-20 09:09:17.546954] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:52.163 [2024-11-20 09:09:17.546959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:52.163 [2024-11-20 09:09:17.546967] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:52.163 [2024-11-20 09:09:17.546975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:52.163 [2024-11-20 09:09:17.546985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.546989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466690) 00:23:52.163 [2024-11-20 09:09:17.546996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.163 [2024-11-20 09:09:17.547006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8100, cid 0, qid 0 00:23:52.163 [2024-11-20 09:09:17.547236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.163 [2024-11-20 09:09:17.547244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.163 [2024-11-20 09:09:17.547248] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466690): datao=0, datal=4096, cccid=0 00:23:52.163 [2024-11-20 09:09:17.547258] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c8100) on tqpair(0x1466690): expected_datao=0, payload_size=4096 00:23:52.163 [2024-11-20 09:09:17.547266] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547274] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547278] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.163 [2024-11-20 09:09:17.547437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.163 [2024-11-20 09:09:17.547441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8100) on tqpair=0x1466690 00:23:52.163 [2024-11-20 09:09:17.547453] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:52.163 [2024-11-20 09:09:17.547457] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:52.163 [2024-11-20 09:09:17.547462] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:52.163 [2024-11-20 09:09:17.547472] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:52.163 [2024-11-20 09:09:17.547477] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:52.163 [2024-11-20 09:09:17.547482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:52.163 [2024-11-20 09:09:17.547493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:52.163 [2024-11-20 09:09:17.547500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547504] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466690) 00:23:52.163 [2024-11-20 09:09:17.547515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:52.163 [2024-11-20 09:09:17.547526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8100, cid 0, qid 0 00:23:52.163 [2024-11-20 09:09:17.547750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.163 [2024-11-20 09:09:17.547757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.163 [2024-11-20 09:09:17.547760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8100) on tqpair=0x1466690 00:23:52.163 [2024-11-20 09:09:17.547771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466690) 00:23:52.163 [2024-11-20 09:09:17.547784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.163 [2024-11-20 09:09:17.547791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1466690) 00:23:52.163 [2024-11-20 09:09:17.547804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:52.163 [2024-11-20 09:09:17.547810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1466690) 00:23:52.163 [2024-11-20 09:09:17.547823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.163 [2024-11-20 09:09:17.547841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466690) 00:23:52.163 [2024-11-20 09:09:17.547854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.163 [2024-11-20 09:09:17.547859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:52.163 [2024-11-20 09:09:17.547867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:52.163 [2024-11-20 09:09:17.547874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.163 [2024-11-20 09:09:17.547878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1466690) 00:23:52.163 [2024-11-20 09:09:17.547884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.163 [2024-11-20 09:09:17.547896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x14c8100, cid 0, qid 0 00:23:52.163 [2024-11-20 09:09:17.547901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8280, cid 1, qid 0 00:23:52.164 [2024-11-20 09:09:17.547906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8400, cid 2, qid 0 00:23:52.164 [2024-11-20 09:09:17.547911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8580, cid 3, qid 0 00:23:52.164 [2024-11-20 09:09:17.547916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8700, cid 4, qid 0 00:23:52.164 [2024-11-20 09:09:17.548169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.164 [2024-11-20 09:09:17.548176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.164 [2024-11-20 09:09:17.548180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.548184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8700) on tqpair=0x1466690 00:23:52.164 [2024-11-20 09:09:17.548192] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:52.164 [2024-11-20 09:09:17.548197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.548206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.548212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.548219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.548223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.164 [2024-11-20 
09:09:17.548226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1466690) 00:23:52.164 [2024-11-20 09:09:17.548233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:52.164 [2024-11-20 09:09:17.548243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8700, cid 4, qid 0 00:23:52.164 [2024-11-20 09:09:17.548460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.164 [2024-11-20 09:09:17.548467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.164 [2024-11-20 09:09:17.548471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.548474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8700) on tqpair=0x1466690 00:23:52.164 [2024-11-20 09:09:17.548542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.548555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.548562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.548566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1466690) 00:23:52.164 [2024-11-20 09:09:17.548572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.164 [2024-11-20 09:09:17.548584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8700, cid 4, qid 0 00:23:52.164 [2024-11-20 09:09:17.548765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.164 [2024-11-20 09:09:17.548771] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.164 [2024-11-20 09:09:17.548774] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.548778] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466690): datao=0, datal=4096, cccid=4 00:23:52.164 [2024-11-20 09:09:17.548782] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c8700) on tqpair(0x1466690): expected_datao=0, payload_size=4096 00:23:52.164 [2024-11-20 09:09:17.548787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.548805] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.548809] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.548954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.164 [2024-11-20 09:09:17.548960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.164 [2024-11-20 09:09:17.548963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.548967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8700) on tqpair=0x1466690 00:23:52.164 [2024-11-20 09:09:17.548976] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:52.164 [2024-11-20 09:09:17.548986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.548995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.549002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.549008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1466690) 00:23:52.164 [2024-11-20 09:09:17.549014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.164 [2024-11-20 09:09:17.549026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8700, cid 4, qid 0 00:23:52.164 [2024-11-20 09:09:17.553172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.164 [2024-11-20 09:09:17.553183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.164 [2024-11-20 09:09:17.553186] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.553190] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466690): datao=0, datal=4096, cccid=4 00:23:52.164 [2024-11-20 09:09:17.553194] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c8700) on tqpair(0x1466690): expected_datao=0, payload_size=4096 00:23:52.164 [2024-11-20 09:09:17.553199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.553205] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.553209] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.553215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.164 [2024-11-20 09:09:17.553221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.164 [2024-11-20 09:09:17.553227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.553231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8700) on tqpair=0x1466690 00:23:52.164 [2024-11-20 09:09:17.553246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:52.164 
[2024-11-20 09:09:17.553256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.553263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.553267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1466690) 00:23:52.164 [2024-11-20 09:09:17.553273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.164 [2024-11-20 09:09:17.553286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8700, cid 4, qid 0 00:23:52.164 [2024-11-20 09:09:17.553488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.164 [2024-11-20 09:09:17.553495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.164 [2024-11-20 09:09:17.553499] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.553502] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466690): datao=0, datal=4096, cccid=4 00:23:52.164 [2024-11-20 09:09:17.553507] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c8700) on tqpair(0x1466690): expected_datao=0, payload_size=4096 00:23:52.164 [2024-11-20 09:09:17.553511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.553518] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.553521] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.553675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.164 [2024-11-20 09:09:17.553683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.164 [2024-11-20 09:09:17.553686] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.164 [2024-11-20 09:09:17.553690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8700) on tqpair=0x1466690 00:23:52.164 [2024-11-20 09:09:17.553697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.553705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.553715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.553721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:52.164 [2024-11-20 09:09:17.553727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:52.165 [2024-11-20 09:09:17.553732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:52.165 [2024-11-20 09:09:17.553737] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:52.165 [2024-11-20 09:09:17.553742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:52.165 [2024-11-20 09:09:17.553747] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:52.165 [2024-11-20 09:09:17.553765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.553769] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1466690) 00:23:52.165 [2024-11-20 09:09:17.553778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.165 [2024-11-20 09:09:17.553786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.553789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.553793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1466690) 00:23:52.165 [2024-11-20 09:09:17.553799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.165 [2024-11-20 09:09:17.553813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8700, cid 4, qid 0 00:23:52.165 [2024-11-20 09:09:17.553819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8880, cid 5, qid 0 00:23:52.165 [2024-11-20 09:09:17.554014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.165 [2024-11-20 09:09:17.554020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.165 [2024-11-20 09:09:17.554023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8700) on tqpair=0x1466690 00:23:52.165 [2024-11-20 09:09:17.554034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.165 [2024-11-20 09:09:17.554040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.165 [2024-11-20 09:09:17.554044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8880) on tqpair=0x1466690 00:23:52.165 [2024-11-20 
09:09:17.554057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1466690) 00:23:52.165 [2024-11-20 09:09:17.554067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.165 [2024-11-20 09:09:17.554077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8880, cid 5, qid 0 00:23:52.165 [2024-11-20 09:09:17.554284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.165 [2024-11-20 09:09:17.554292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.165 [2024-11-20 09:09:17.554295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8880) on tqpair=0x1466690 00:23:52.165 [2024-11-20 09:09:17.554309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1466690) 00:23:52.165 [2024-11-20 09:09:17.554320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.165 [2024-11-20 09:09:17.554330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8880, cid 5, qid 0 00:23:52.165 [2024-11-20 09:09:17.554500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.165 [2024-11-20 09:09:17.554507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.165 [2024-11-20 09:09:17.554510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x14c8880) on tqpair=0x1466690 00:23:52.165 [2024-11-20 09:09:17.554523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1466690) 00:23:52.165 [2024-11-20 09:09:17.554534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.165 [2024-11-20 09:09:17.554543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8880, cid 5, qid 0 00:23:52.165 [2024-11-20 09:09:17.554708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.165 [2024-11-20 09:09:17.554715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.165 [2024-11-20 09:09:17.554718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8880) on tqpair=0x1466690 00:23:52.165 [2024-11-20 09:09:17.554738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1466690) 00:23:52.165 [2024-11-20 09:09:17.554748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.165 [2024-11-20 09:09:17.554756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1466690) 00:23:52.165 [2024-11-20 09:09:17.554765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:52.165 [2024-11-20 09:09:17.554773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1466690) 00:23:52.165 [2024-11-20 09:09:17.554783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.165 [2024-11-20 09:09:17.554790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.554794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1466690) 00:23:52.165 [2024-11-20 09:09:17.554800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.165 [2024-11-20 09:09:17.554812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8880, cid 5, qid 0 00:23:52.165 [2024-11-20 09:09:17.554817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8700, cid 4, qid 0 00:23:52.165 [2024-11-20 09:09:17.554822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8a00, cid 6, qid 0 00:23:52.165 [2024-11-20 09:09:17.554826] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8b80, cid 7, qid 0 00:23:52.165 [2024-11-20 09:09:17.555123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.165 [2024-11-20 09:09:17.555129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.165 [2024-11-20 09:09:17.555133] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.555137] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466690): datao=0, datal=8192, cccid=5 00:23:52.165 [2024-11-20 09:09:17.555141] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c8880) on tqpair(0x1466690): expected_datao=0, payload_size=8192 00:23:52.165 [2024-11-20 09:09:17.555145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.555224] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.555229] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.555235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.165 [2024-11-20 09:09:17.555240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.165 [2024-11-20 09:09:17.555244] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.555248] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466690): datao=0, datal=512, cccid=4 00:23:52.165 [2024-11-20 09:09:17.555252] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c8700) on tqpair(0x1466690): expected_datao=0, payload_size=512 00:23:52.165 [2024-11-20 09:09:17.555256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.555279] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.555283] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.165 [2024-11-20 09:09:17.555289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.165 [2024-11-20 09:09:17.555295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.165 [2024-11-20 09:09:17.555298] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555302] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466690): datao=0, datal=512, cccid=6 00:23:52.166 [2024-11-20 09:09:17.555306] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x14c8a00) on tqpair(0x1466690): expected_datao=0, payload_size=512 00:23:52.166 [2024-11-20 09:09:17.555310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555317] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555320] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.166 [2024-11-20 09:09:17.555332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.166 [2024-11-20 09:09:17.555335] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555339] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466690): datao=0, datal=4096, cccid=7 00:23:52.166 [2024-11-20 09:09:17.555343] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14c8b80) on tqpair(0x1466690): expected_datao=0, payload_size=4096 00:23:52.166 [2024-11-20 09:09:17.555347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555354] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555358] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.166 [2024-11-20 09:09:17.555393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.166 [2024-11-20 09:09:17.555396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8880) on tqpair=0x1466690 00:23:52.166 [2024-11-20 09:09:17.555416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.166 [2024-11-20 09:09:17.555422] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.166 [2024-11-20 09:09:17.555425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8700) on tqpair=0x1466690 00:23:52.166 [2024-11-20 09:09:17.555441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.166 [2024-11-20 09:09:17.555447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.166 [2024-11-20 09:09:17.555450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8a00) on tqpair=0x1466690 00:23:52.166 [2024-11-20 09:09:17.555461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.166 [2024-11-20 09:09:17.555467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.166 [2024-11-20 09:09:17.555470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.166 [2024-11-20 09:09:17.555474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8b80) on tqpair=0x1466690 00:23:52.166 ===================================================== 00:23:52.166 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:52.166 ===================================================== 00:23:52.166 Controller Capabilities/Features 00:23:52.166 ================================ 00:23:52.166 Vendor ID: 8086 00:23:52.166 Subsystem Vendor ID: 8086 00:23:52.166 Serial Number: SPDK00000000000001 00:23:52.166 Model Number: SPDK bdev Controller 00:23:52.166 Firmware Version: 25.01 00:23:52.166 Recommended Arb Burst: 6 00:23:52.166 IEEE OUI Identifier: e4 d2 5c 00:23:52.166 Multi-path I/O 00:23:52.166 May have multiple subsystem ports: Yes 00:23:52.166 May have multiple controllers: Yes 00:23:52.166 Associated with SR-IOV VF: No 
00:23:52.166 Max Data Transfer Size: 131072 00:23:52.166 Max Number of Namespaces: 32 00:23:52.166 Max Number of I/O Queues: 127 00:23:52.166 NVMe Specification Version (VS): 1.3 00:23:52.166 NVMe Specification Version (Identify): 1.3 00:23:52.166 Maximum Queue Entries: 128 00:23:52.166 Contiguous Queues Required: Yes 00:23:52.166 Arbitration Mechanisms Supported 00:23:52.166 Weighted Round Robin: Not Supported 00:23:52.166 Vendor Specific: Not Supported 00:23:52.166 Reset Timeout: 15000 ms 00:23:52.166 Doorbell Stride: 4 bytes 00:23:52.166 NVM Subsystem Reset: Not Supported 00:23:52.166 Command Sets Supported 00:23:52.166 NVM Command Set: Supported 00:23:52.166 Boot Partition: Not Supported 00:23:52.166 Memory Page Size Minimum: 4096 bytes 00:23:52.166 Memory Page Size Maximum: 4096 bytes 00:23:52.166 Persistent Memory Region: Not Supported 00:23:52.166 Optional Asynchronous Events Supported 00:23:52.166 Namespace Attribute Notices: Supported 00:23:52.166 Firmware Activation Notices: Not Supported 00:23:52.166 ANA Change Notices: Not Supported 00:23:52.166 PLE Aggregate Log Change Notices: Not Supported 00:23:52.166 LBA Status Info Alert Notices: Not Supported 00:23:52.166 EGE Aggregate Log Change Notices: Not Supported 00:23:52.166 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.166 Zone Descriptor Change Notices: Not Supported 00:23:52.166 Discovery Log Change Notices: Not Supported 00:23:52.166 Controller Attributes 00:23:52.166 128-bit Host Identifier: Supported 00:23:52.166 Non-Operational Permissive Mode: Not Supported 00:23:52.166 NVM Sets: Not Supported 00:23:52.166 Read Recovery Levels: Not Supported 00:23:52.166 Endurance Groups: Not Supported 00:23:52.166 Predictable Latency Mode: Not Supported 00:23:52.166 Traffic Based Keep ALive: Not Supported 00:23:52.166 Namespace Granularity: Not Supported 00:23:52.166 SQ Associations: Not Supported 00:23:52.166 UUID List: Not Supported 00:23:52.166 Multi-Domain Subsystem: Not Supported 00:23:52.166 
Fixed Capacity Management: Not Supported 00:23:52.166 Variable Capacity Management: Not Supported 00:23:52.166 Delete Endurance Group: Not Supported 00:23:52.166 Delete NVM Set: Not Supported 00:23:52.166 Extended LBA Formats Supported: Not Supported 00:23:52.166 Flexible Data Placement Supported: Not Supported 00:23:52.166 00:23:52.166 Controller Memory Buffer Support 00:23:52.166 ================================ 00:23:52.166 Supported: No 00:23:52.166 00:23:52.166 Persistent Memory Region Support 00:23:52.166 ================================ 00:23:52.166 Supported: No 00:23:52.166 00:23:52.166 Admin Command Set Attributes 00:23:52.166 ============================ 00:23:52.166 Security Send/Receive: Not Supported 00:23:52.166 Format NVM: Not Supported 00:23:52.166 Firmware Activate/Download: Not Supported 00:23:52.166 Namespace Management: Not Supported 00:23:52.167 Device Self-Test: Not Supported 00:23:52.167 Directives: Not Supported 00:23:52.167 NVMe-MI: Not Supported 00:23:52.167 Virtualization Management: Not Supported 00:23:52.167 Doorbell Buffer Config: Not Supported 00:23:52.167 Get LBA Status Capability: Not Supported 00:23:52.167 Command & Feature Lockdown Capability: Not Supported 00:23:52.167 Abort Command Limit: 4 00:23:52.167 Async Event Request Limit: 4 00:23:52.167 Number of Firmware Slots: N/A 00:23:52.167 Firmware Slot 1 Read-Only: N/A 00:23:52.167 Firmware Activation Without Reset: N/A 00:23:52.167 Multiple Update Detection Support: N/A 00:23:52.167 Firmware Update Granularity: No Information Provided 00:23:52.167 Per-Namespace SMART Log: No 00:23:52.167 Asymmetric Namespace Access Log Page: Not Supported 00:23:52.167 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:52.167 Command Effects Log Page: Supported 00:23:52.167 Get Log Page Extended Data: Supported 00:23:52.167 Telemetry Log Pages: Not Supported 00:23:52.167 Persistent Event Log Pages: Not Supported 00:23:52.167 Supported Log Pages Log Page: May Support 00:23:52.167 Commands Supported & 
Effects Log Page: Not Supported 00:23:52.167 Feature Identifiers & Effects Log Page:May Support 00:23:52.167 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.167 Data Area 4 for Telemetry Log: Not Supported 00:23:52.167 Error Log Page Entries Supported: 128 00:23:52.167 Keep Alive: Supported 00:23:52.167 Keep Alive Granularity: 10000 ms 00:23:52.167 00:23:52.167 NVM Command Set Attributes 00:23:52.167 ========================== 00:23:52.167 Submission Queue Entry Size 00:23:52.167 Max: 64 00:23:52.167 Min: 64 00:23:52.167 Completion Queue Entry Size 00:23:52.167 Max: 16 00:23:52.167 Min: 16 00:23:52.167 Number of Namespaces: 32 00:23:52.167 Compare Command: Supported 00:23:52.167 Write Uncorrectable Command: Not Supported 00:23:52.167 Dataset Management Command: Supported 00:23:52.167 Write Zeroes Command: Supported 00:23:52.167 Set Features Save Field: Not Supported 00:23:52.167 Reservations: Supported 00:23:52.167 Timestamp: Not Supported 00:23:52.167 Copy: Supported 00:23:52.167 Volatile Write Cache: Present 00:23:52.167 Atomic Write Unit (Normal): 1 00:23:52.167 Atomic Write Unit (PFail): 1 00:23:52.167 Atomic Compare & Write Unit: 1 00:23:52.167 Fused Compare & Write: Supported 00:23:52.167 Scatter-Gather List 00:23:52.167 SGL Command Set: Supported 00:23:52.167 SGL Keyed: Supported 00:23:52.167 SGL Bit Bucket Descriptor: Not Supported 00:23:52.167 SGL Metadata Pointer: Not Supported 00:23:52.167 Oversized SGL: Not Supported 00:23:52.167 SGL Metadata Address: Not Supported 00:23:52.167 SGL Offset: Supported 00:23:52.167 Transport SGL Data Block: Not Supported 00:23:52.167 Replay Protected Memory Block: Not Supported 00:23:52.167 00:23:52.167 Firmware Slot Information 00:23:52.167 ========================= 00:23:52.167 Active slot: 1 00:23:52.167 Slot 1 Firmware Revision: 25.01 00:23:52.167 00:23:52.167 00:23:52.167 Commands Supported and Effects 00:23:52.167 ============================== 00:23:52.167 Admin Commands 00:23:52.167 -------------- 
00:23:52.167 Get Log Page (02h): Supported 00:23:52.167 Identify (06h): Supported 00:23:52.167 Abort (08h): Supported 00:23:52.167 Set Features (09h): Supported 00:23:52.167 Get Features (0Ah): Supported 00:23:52.167 Asynchronous Event Request (0Ch): Supported 00:23:52.167 Keep Alive (18h): Supported 00:23:52.167 I/O Commands 00:23:52.167 ------------ 00:23:52.167 Flush (00h): Supported LBA-Change 00:23:52.167 Write (01h): Supported LBA-Change 00:23:52.167 Read (02h): Supported 00:23:52.167 Compare (05h): Supported 00:23:52.167 Write Zeroes (08h): Supported LBA-Change 00:23:52.167 Dataset Management (09h): Supported LBA-Change 00:23:52.167 Copy (19h): Supported LBA-Change 00:23:52.167 00:23:52.167 Error Log 00:23:52.167 ========= 00:23:52.167 00:23:52.167 Arbitration 00:23:52.167 =========== 00:23:52.167 Arbitration Burst: 1 00:23:52.167 00:23:52.167 Power Management 00:23:52.167 ================ 00:23:52.167 Number of Power States: 1 00:23:52.167 Current Power State: Power State #0 00:23:52.167 Power State #0: 00:23:52.167 Max Power: 0.00 W 00:23:52.167 Non-Operational State: Operational 00:23:52.167 Entry Latency: Not Reported 00:23:52.167 Exit Latency: Not Reported 00:23:52.167 Relative Read Throughput: 0 00:23:52.167 Relative Read Latency: 0 00:23:52.167 Relative Write Throughput: 0 00:23:52.167 Relative Write Latency: 0 00:23:52.167 Idle Power: Not Reported 00:23:52.167 Active Power: Not Reported 00:23:52.167 Non-Operational Permissive Mode: Not Supported 00:23:52.167 00:23:52.167 Health Information 00:23:52.167 ================== 00:23:52.167 Critical Warnings: 00:23:52.167 Available Spare Space: OK 00:23:52.167 Temperature: OK 00:23:52.167 Device Reliability: OK 00:23:52.167 Read Only: No 00:23:52.167 Volatile Memory Backup: OK 00:23:52.167 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:52.167 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:52.167 Available Spare: 0% 00:23:52.167 Available Spare Threshold: 0% 00:23:52.167 Life Percentage 
Used:[2024-11-20 09:09:17.555578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.167 [2024-11-20 09:09:17.555584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1466690) 00:23:52.167 [2024-11-20 09:09:17.555592] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.167 [2024-11-20 09:09:17.555606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8b80, cid 7, qid 0 00:23:52.167 [2024-11-20 09:09:17.555789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.167 [2024-11-20 09:09:17.555799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.167 [2024-11-20 09:09:17.555802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.167 [2024-11-20 09:09:17.555806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8b80) on tqpair=0x1466690 00:23:52.167 [2024-11-20 09:09:17.555838] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:52.167 [2024-11-20 09:09:17.555848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8100) on tqpair=0x1466690 00:23:52.167 [2024-11-20 09:09:17.555854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.167 [2024-11-20 09:09:17.555860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8280) on tqpair=0x1466690 00:23:52.167 [2024-11-20 09:09:17.555865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.167 [2024-11-20 09:09:17.555870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8400) on tqpair=0x1466690 00:23:52.167 [2024-11-20 09:09:17.555874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.167 [2024-11-20 09:09:17.555879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8580) on tqpair=0x1466690 00:23:52.167 [2024-11-20 09:09:17.555884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.167 [2024-11-20 09:09:17.555893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.167 [2024-11-20 09:09:17.555896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.167 [2024-11-20 09:09:17.555900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466690) 00:23:52.167 [2024-11-20 09:09:17.555907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.167 [2024-11-20 09:09:17.555919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8580, cid 3, qid 0 00:23:52.167 [2024-11-20 09:09:17.556121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.167 [2024-11-20 09:09:17.556127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.167 [2024-11-20 09:09:17.556130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.167 [2024-11-20 09:09:17.556134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8580) on tqpair=0x1466690 00:23:52.168 [2024-11-20 09:09:17.556142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.556145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.556149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466690) 00:23:52.168 [2024-11-20 09:09:17.556156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.168 [2024-11-20 09:09:17.556179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8580, cid 3, qid 0 00:23:52.168 [2024-11-20 09:09:17.556374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.168 [2024-11-20 09:09:17.556381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.168 [2024-11-20 09:09:17.556385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.556388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8580) on tqpair=0x1466690 00:23:52.168 [2024-11-20 09:09:17.556393] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:52.168 [2024-11-20 09:09:17.556399] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:52.168 [2024-11-20 09:09:17.556409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.556415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.556424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466690) 00:23:52.168 [2024-11-20 09:09:17.556433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.168 [2024-11-20 09:09:17.556444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8580, cid 3, qid 0 00:23:52.168 [2024-11-20 09:09:17.556675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.168 [2024-11-20 09:09:17.556682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.168 [2024-11-20 09:09:17.556685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.556689] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8580) on tqpair=0x1466690 00:23:52.168 [2024-11-20 09:09:17.556699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.556703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.556707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466690) 00:23:52.168 [2024-11-20 09:09:17.556713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.168 [2024-11-20 09:09:17.556724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8580, cid 3, qid 0 00:23:52.168 [2024-11-20 09:09:17.556925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.168 [2024-11-20 09:09:17.556931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.168 [2024-11-20 09:09:17.556935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.556939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8580) on tqpair=0x1466690 00:23:52.168 [2024-11-20 09:09:17.556949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.556952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.556956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466690) 00:23:52.168 [2024-11-20 09:09:17.556963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.168 [2024-11-20 09:09:17.556973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8580, cid 3, qid 0 00:23:52.168 [2024-11-20 09:09:17.561170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.168 [2024-11-20 
09:09:17.561182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.168 [2024-11-20 09:09:17.561186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.561190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8580) on tqpair=0x1466690 00:23:52.168 [2024-11-20 09:09:17.561200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.561204] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.561208] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466690) 00:23:52.168 [2024-11-20 09:09:17.561214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.168 [2024-11-20 09:09:17.561227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14c8580, cid 3, qid 0 00:23:52.168 [2024-11-20 09:09:17.561456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.168 [2024-11-20 09:09:17.561464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.168 [2024-11-20 09:09:17.561467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.168 [2024-11-20 09:09:17.561471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14c8580) on tqpair=0x1466690 00:23:52.168 [2024-11-20 09:09:17.561479] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:23:52.168 0% 00:23:52.168 Data Units Read: 0 00:23:52.168 Data Units Written: 0 00:23:52.168 Host Read Commands: 0 00:23:52.168 Host Write Commands: 0 00:23:52.168 Controller Busy Time: 0 minutes 00:23:52.168 Power Cycles: 0 00:23:52.168 Power On Hours: 0 hours 00:23:52.168 Unsafe Shutdowns: 0 00:23:52.168 Unrecoverable Media Errors: 0 00:23:52.168 Lifetime Error Log Entries: 0 
00:23:52.168 Warning Temperature Time: 0 minutes 00:23:52.168 Critical Temperature Time: 0 minutes 00:23:52.168 00:23:52.168 Number of Queues 00:23:52.168 ================ 00:23:52.168 Number of I/O Submission Queues: 127 00:23:52.168 Number of I/O Completion Queues: 127 00:23:52.168 00:23:52.168 Active Namespaces 00:23:52.168 ================= 00:23:52.168 Namespace ID:1 00:23:52.168 Error Recovery Timeout: Unlimited 00:23:52.168 Command Set Identifier: NVM (00h) 00:23:52.168 Deallocate: Supported 00:23:52.168 Deallocated/Unwritten Error: Not Supported 00:23:52.168 Deallocated Read Value: Unknown 00:23:52.168 Deallocate in Write Zeroes: Not Supported 00:23:52.168 Deallocated Guard Field: 0xFFFF 00:23:52.168 Flush: Supported 00:23:52.168 Reservation: Supported 00:23:52.168 Namespace Sharing Capabilities: Multiple Controllers 00:23:52.168 Size (in LBAs): 131072 (0GiB) 00:23:52.168 Capacity (in LBAs): 131072 (0GiB) 00:23:52.168 Utilization (in LBAs): 131072 (0GiB) 00:23:52.168 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:52.168 EUI64: ABCDEF0123456789 00:23:52.168 UUID: 97b24774-06fd-41d7-bc13-9f0ba6737e81 00:23:52.168 Thin Provisioning: Not Supported 00:23:52.168 Per-NS Atomic Units: Yes 00:23:52.168 Atomic Boundary Size (Normal): 0 00:23:52.168 Atomic Boundary Size (PFail): 0 00:23:52.168 Atomic Boundary Offset: 0 00:23:52.168 Maximum Single Source Range Length: 65535 00:23:52.168 Maximum Copy Length: 65535 00:23:52.168 Maximum Source Range Count: 1 00:23:52.168 NGUID/EUI64 Never Reused: No 00:23:52.168 Namespace Write Protected: No 00:23:52.168 Number of LBA Formats: 1 00:23:52.168 Current LBA Format: LBA Format #00 00:23:52.168 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:52.168 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.169 09:09:17 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.169 rmmod nvme_tcp 00:23:52.169 rmmod nvme_fabrics 00:23:52.169 rmmod nvme_keyring 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 787468 ']' 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 787468 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 787468 ']' 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 787468 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:52.169 09:09:17 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.169 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787468 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787468' 00:23:52.431 killing process with pid 787468 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 787468 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 787468 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:52.431 09:09:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.979 09:09:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.979 00:23:54.979 real 0m11.627s 00:23:54.979 user 0m8.460s 00:23:54.979 sys 0m6.100s 00:23:54.979 09:09:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.979 09:09:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.979 ************************************ 00:23:54.979 END TEST nvmf_identify 00:23:54.979 ************************************ 00:23:54.979 09:09:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:54.979 09:09:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.979 09:09:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.979 09:09:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.979 ************************************ 00:23:54.979 START TEST nvmf_perf 00:23:54.979 ************************************ 00:23:54.979 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:54.979 * Looking for test storage... 
00:23:54.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.979 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:54.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.980 --rc genhtml_branch_coverage=1 00:23:54.980 --rc genhtml_function_coverage=1 00:23:54.980 --rc genhtml_legend=1 00:23:54.980 --rc geninfo_all_blocks=1 00:23:54.980 --rc geninfo_unexecuted_blocks=1 00:23:54.980 00:23:54.980 ' 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:54.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:54.980 --rc genhtml_branch_coverage=1 00:23:54.980 --rc genhtml_function_coverage=1 00:23:54.980 --rc genhtml_legend=1 00:23:54.980 --rc geninfo_all_blocks=1 00:23:54.980 --rc geninfo_unexecuted_blocks=1 00:23:54.980 00:23:54.980 ' 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:54.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.980 --rc genhtml_branch_coverage=1 00:23:54.980 --rc genhtml_function_coverage=1 00:23:54.980 --rc genhtml_legend=1 00:23:54.980 --rc geninfo_all_blocks=1 00:23:54.980 --rc geninfo_unexecuted_blocks=1 00:23:54.980 00:23:54.980 ' 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:54.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.980 --rc genhtml_branch_coverage=1 00:23:54.980 --rc genhtml_function_coverage=1 00:23:54.980 --rc genhtml_legend=1 00:23:54.980 --rc geninfo_all_blocks=1 00:23:54.980 --rc geninfo_unexecuted_blocks=1 00:23:54.980 00:23:54.980 ' 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.980 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:54.981 09:09:20 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.981 09:09:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.126 09:09:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.126 
09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:03.126 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:03.126 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.126 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:03.127 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.127 09:09:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:03.127 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:24:03.127 00:24:03.127 --- 10.0.0.2 ping statistics --- 00:24:03.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.127 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:24:03.127 00:24:03.127 --- 10.0.0.1 ping statistics --- 00:24:03.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.127 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=791964 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 791964 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 791964 ']' 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.127 09:09:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.127 [2024-11-20 09:09:27.924199] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:24:03.127 [2024-11-20 09:09:27.924267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.127 [2024-11-20 09:09:28.026250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.127 [2024-11-20 09:09:28.078852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.127 [2024-11-20 09:09:28.078908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.127 [2024-11-20 09:09:28.078917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.127 [2024-11-20 09:09:28.078924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.127 [2024-11-20 09:09:28.078931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:03.127 [2024-11-20 09:09:28.081077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.127 [2024-11-20 09:09:28.081238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.127 [2024-11-20 09:09:28.081307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.127 [2024-11-20 09:09:28.081309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.390 09:09:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.390 09:09:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:03.390 09:09:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.390 09:09:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.390 09:09:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.390 09:09:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.390 09:09:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:03.390 09:09:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:03.962 09:09:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:03.962 09:09:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:04.223 09:09:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:04.223 09:09:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:04.223 09:09:29 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:04.223 09:09:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:04.223 09:09:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:04.223 09:09:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:04.223 09:09:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:04.484 [2024-11-20 09:09:29.899326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.484 09:09:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:04.745 09:09:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:04.745 09:09:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:05.005 09:09:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:05.005 09:09:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:05.005 09:09:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.266 [2024-11-20 09:09:30.687030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.266 09:09:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:05.525 09:09:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:05.525 09:09:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:05.525 09:09:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:05.525 09:09:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:06.908 Initializing NVMe Controllers 00:24:06.908 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:06.908 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:06.908 Initialization complete. Launching workers. 00:24:06.908 ======================================================== 00:24:06.908 Latency(us) 00:24:06.908 Device Information : IOPS MiB/s Average min max 00:24:06.908 PCIE (0000:65:00.0) NSID 1 from core 0: 77880.56 304.22 410.13 13.31 5278.58 00:24:06.908 ======================================================== 00:24:06.908 Total : 77880.56 304.22 410.13 13.31 5278.58 00:24:06.908 00:24:06.908 09:09:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:08.291 Initializing NVMe Controllers 00:24:08.291 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:08.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:08.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:08.291 Initialization complete. Launching workers. 
00:24:08.291 ======================================================== 00:24:08.291 Latency(us) 00:24:08.291 Device Information : IOPS MiB/s Average min max 00:24:08.291 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 113.59 0.44 8823.09 231.29 45635.19 00:24:08.291 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 52.81 0.21 19237.09 7953.64 47888.89 00:24:08.291 ======================================================== 00:24:08.291 Total : 166.40 0.65 12128.13 231.29 47888.89 00:24:08.291 00:24:08.291 09:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:09.232 Initializing NVMe Controllers 00:24:09.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:09.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:09.232 Initialization complete. Launching workers. 
00:24:09.232 ======================================================== 00:24:09.232 Latency(us) 00:24:09.232 Device Information : IOPS MiB/s Average min max 00:24:09.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11572.99 45.21 2765.38 474.27 6294.96 00:24:09.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3827.00 14.95 8420.18 4410.74 18318.47 00:24:09.232 ======================================================== 00:24:09.232 Total : 15399.98 60.16 4170.63 474.27 18318.47 00:24:09.232 00:24:09.232 09:09:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:09.232 09:09:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:09.232 09:09:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.772 Initializing NVMe Controllers 00:24:11.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:11.772 Controller IO queue size 128, less than required. 00:24:11.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:11.772 Controller IO queue size 128, less than required. 00:24:11.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:11.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:11.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:11.772 Initialization complete. Launching workers. 
00:24:11.772 ======================================================== 00:24:11.772 Latency(us) 00:24:11.772 Device Information : IOPS MiB/s Average min max 00:24:11.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1873.36 468.34 69998.89 39524.60 120365.57 00:24:11.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 607.96 151.99 215780.65 71914.17 350928.55 00:24:11.772 ======================================================== 00:24:11.772 Total : 2481.32 620.33 105717.33 39524.60 350928.55 00:24:11.772 00:24:11.772 09:09:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:12.342 No valid NVMe controllers or AIO or URING devices found 00:24:12.342 Initializing NVMe Controllers 00:24:12.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.342 Controller IO queue size 128, less than required. 00:24:12.342 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:12.342 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:12.342 Controller IO queue size 128, less than required. 00:24:12.342 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:12.342 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:12.342 WARNING: Some requested NVMe devices were skipped 00:24:12.342 09:09:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:14.884 Initializing NVMe Controllers 00:24:14.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:14.884 Controller IO queue size 128, less than required. 00:24:14.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.884 Controller IO queue size 128, less than required. 00:24:14.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:14.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:14.884 Initialization complete. Launching workers. 
00:24:14.884 00:24:14.884 ==================== 00:24:14.884 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:14.884 TCP transport: 00:24:14.884 polls: 42364 00:24:14.884 idle_polls: 28791 00:24:14.884 sock_completions: 13573 00:24:14.884 nvme_completions: 7241 00:24:14.884 submitted_requests: 10956 00:24:14.884 queued_requests: 1 00:24:14.884 00:24:14.884 ==================== 00:24:14.884 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:14.884 TCP transport: 00:24:14.884 polls: 38372 00:24:14.884 idle_polls: 23952 00:24:14.884 sock_completions: 14420 00:24:14.884 nvme_completions: 7375 00:24:14.884 submitted_requests: 11094 00:24:14.884 queued_requests: 1 00:24:14.884 ======================================================== 00:24:14.884 Latency(us) 00:24:14.884 Device Information : IOPS MiB/s Average min max 00:24:14.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1809.98 452.50 71943.04 48921.69 129136.45 00:24:14.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1843.48 460.87 70552.25 32805.03 139580.21 00:24:14.884 ======================================================== 00:24:14.884 Total : 3653.47 913.37 71241.27 32805.03 139580.21 00:24:14.884 00:24:14.884 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:14.884 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:15.145 09:09:40 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:15.145 rmmod nvme_tcp 00:24:15.145 rmmod nvme_fabrics 00:24:15.145 rmmod nvme_keyring 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 791964 ']' 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 791964 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 791964 ']' 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 791964 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 791964 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 791964' 00:24:15.145 killing process with pid 791964 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 791964 00:24:15.145 09:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 791964 00:24:17.058 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:17.058 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:17.058 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:17.058 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:17.058 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:17.058 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:17.058 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:17.058 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.058 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.059 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.059 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.059 09:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.077 09:09:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.377 00:24:19.377 real 0m24.496s 00:24:19.377 user 0m59.188s 00:24:19.377 sys 0m8.695s 00:24:19.377 09:09:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.377 09:09:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.377 ************************************ 00:24:19.377 END TEST nvmf_perf 00:24:19.377 ************************************ 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.378 ************************************ 00:24:19.378 START TEST nvmf_fio_host 00:24:19.378 ************************************ 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:19.378 * Looking for test storage... 00:24:19.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.378 09:09:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.378 09:09:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:19.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.378 --rc genhtml_branch_coverage=1 00:24:19.378 --rc genhtml_function_coverage=1 00:24:19.378 --rc genhtml_legend=1 00:24:19.378 --rc geninfo_all_blocks=1 00:24:19.378 --rc geninfo_unexecuted_blocks=1 00:24:19.378 00:24:19.378 ' 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:19.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.378 --rc genhtml_branch_coverage=1 00:24:19.378 --rc genhtml_function_coverage=1 00:24:19.378 --rc genhtml_legend=1 00:24:19.378 --rc geninfo_all_blocks=1 00:24:19.378 --rc geninfo_unexecuted_blocks=1 00:24:19.378 00:24:19.378 ' 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:19.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.378 --rc genhtml_branch_coverage=1 00:24:19.378 --rc genhtml_function_coverage=1 00:24:19.378 --rc genhtml_legend=1 00:24:19.378 --rc geninfo_all_blocks=1 00:24:19.378 --rc geninfo_unexecuted_blocks=1 00:24:19.378 00:24:19.378 ' 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:19.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.378 --rc genhtml_branch_coverage=1 00:24:19.378 --rc genhtml_function_coverage=1 00:24:19.378 --rc genhtml_legend=1 00:24:19.378 --rc geninfo_all_blocks=1 00:24:19.378 --rc geninfo_unexecuted_blocks=1 00:24:19.378 00:24:19.378 ' 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.378 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.668 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.668 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:19.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:19.669 09:09:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:19.669 09:09:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.0 (0x8086 - 0x159b)' 00:24:27.816 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:27.816 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.816 09:09:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:27.816 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:27.816 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.816 09:09:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:24:27.816 00:24:27.816 --- 10.0.0.2 ping statistics --- 00:24:27.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.816 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:24:27.816 00:24:27.816 --- 10.0.0.1 ping statistics --- 00:24:27.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.816 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.816 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=799033 00:24:27.817 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.817 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:27.817 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 799033 00:24:27.817 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 799033 ']' 00:24:27.817 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.817 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.817 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.817 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.817 09:09:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.817 [2024-11-20 09:09:52.477599] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:24:27.817 [2024-11-20 09:09:52.477665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.817 [2024-11-20 09:09:52.576846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.817 [2024-11-20 09:09:52.629734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.817 [2024-11-20 09:09:52.629786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:27.817 [2024-11-20 09:09:52.629795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.817 [2024-11-20 09:09:52.629802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.817 [2024-11-20 09:09:52.629808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.817 [2024-11-20 09:09:52.631834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.817 [2024-11-20 09:09:52.631997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.817 [2024-11-20 09:09:52.632169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.817 [2024-11-20 09:09:52.632181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.817 09:09:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.817 09:09:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:27.817 09:09:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:28.078 [2024-11-20 09:09:53.472850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.078 09:09:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:28.078 09:09:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.078 09:09:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.078 09:09:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:28.353 Malloc1 00:24:28.353 09:09:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.615 09:09:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:28.876 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.876 [2024-11-20 09:09:54.336279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.876 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:29.137 09:09:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:29.137 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:29.138 09:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:29.398 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:29.398 fio-3.35 00:24:29.398 Starting 1 thread 00:24:31.946 00:24:31.946 test: (groupid=0, jobs=1): err= 0: pid=799679: Wed Nov 20 09:09:57 2024 00:24:31.946 read: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2004msec) 00:24:31.946 slat (usec): min=2, max=288, avg= 2.15, stdev= 2.47 00:24:31.946 clat (usec): min=3298, max=8687, avg=5101.47, stdev=358.50 00:24:31.946 lat (usec): min=3301, max=8689, avg=5103.62, stdev=358.54 00:24:31.946 clat percentiles (usec): 00:24:31.946 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:31.946 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:24:31.946 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:24:31.946 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 7373], 99.95th=[ 8160], 00:24:31.946 | 99.99th=[ 8717] 00:24:31.946 bw ( KiB/s): min=53816, max=55728, per=99.94%, avg=55164.00, stdev=903.32, samples=4 00:24:31.946 iops : min=13454, max=13932, avg=13791.00, stdev=225.83, samples=4 00:24:31.946 write: IOPS=13.8k, BW=53.8MiB/s (56.5MB/s)(108MiB/2004msec); 0 zone resets 00:24:31.946 slat (usec): min=2, max=265, avg= 2.21, stdev= 1.76 00:24:31.946 clat (usec): min=2692, max=8101, avg=4117.83, stdev=294.85 00:24:31.946 lat (usec): min=2694, max=8103, avg=4120.05, stdev=294.92 00:24:31.946 clat percentiles (usec): 00:24:31.946 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884], 00:24:31.946 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:24:31.946 | 70.00th=[ 4228], 
80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:24:31.946 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5735], 99.95th=[ 6849], 00:24:31.946 | 99.99th=[ 7832] 00:24:31.946 bw ( KiB/s): min=54136, max=55664, per=100.00%, avg=55134.00, stdev=699.15, samples=4 00:24:31.946 iops : min=13534, max=13916, avg=13783.50, stdev=174.79, samples=4 00:24:31.946 lat (msec) : 4=16.22%, 10=83.78% 00:24:31.946 cpu : usr=74.04%, sys=24.76%, ctx=34, majf=0, minf=17 00:24:31.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:31.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:31.946 issued rwts: total=27654,27622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:31.946 00:24:31.946 Run status group 0 (all jobs): 00:24:31.946 READ: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:31.946 WRITE: bw=53.8MiB/s (56.5MB/s), 53.8MiB/s-53.8MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:31.946 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:31.946 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:31.946 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:31.946 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
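The bandwidth figures fio reports in the run summary above can be cross-checked against IOPS × block size. A minimal sanity check of the read side of the first run, using the averages copied from the log (13791 IOPS at bs=4096); the variable names here are illustrative, not part of the test scripts:

```shell
#!/usr/bin/env bash
# Cross-check fio's reported bandwidth: bytes/s = IOPS * block size.
# 13791 IOPS and bs=4096 are the averages from the randrw run logged above.
iops=13791
bs=4096
bytes=$(( iops * bs ))
# fio prints binary MiB/s with the decimal MB/s equivalent in parentheses.
mib=$(awk -v b="$bytes" 'BEGIN { printf "%.1f", b / (1024 * 1024) }')
mb=$(awk -v b="$bytes" 'BEGIN { printf "%.1f", b / 1000000 }')
echo "${mib} MiB/s (${mb} MB/s)"
```

This reproduces the `53.9MiB/s (56.5MB/s)` line in the READ status above, confirming the summary is internally consistent.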
00:24:31.946 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:31.947 09:09:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:31.947 09:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:32.514 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:32.514 fio-3.35 00:24:32.514 Starting 1 thread 00:24:35.059 00:24:35.059 test: (groupid=0, jobs=1): err= 0: pid=800388: Wed Nov 20 09:10:00 2024 00:24:35.059 read: IOPS=9403, BW=147MiB/s (154MB/s)(295MiB/2006msec) 00:24:35.059 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.62 00:24:35.059 clat (usec): min=2014, max=15458, avg=8290.44, stdev=2007.38 00:24:35.059 lat (usec): min=2018, max=15461, avg=8294.04, stdev=2007.53 00:24:35.059 clat percentiles (usec): 00:24:35.059 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6456], 00:24:35.059 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8717], 00:24:35.059 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[10945], 95.00th=[11731], 00:24:35.059 | 99.00th=[13042], 99.50th=[13435], 99.90th=[13960], 99.95th=[14222], 00:24:35.059 | 99.99th=[14615] 00:24:35.059 bw ( KiB/s): min=71104, max=82816, per=49.70%, avg=74768.00, stdev=5449.31, samples=4 00:24:35.059 iops : min= 4444, max= 5176, avg=4673.00, stdev=340.58, samples=4 00:24:35.059 write: IOPS=5511, BW=86.1MiB/s (90.3MB/s)(153MiB/1776msec); 0 zone resets 00:24:35.059 slat (usec): min=39, max=447, avg=41.04, stdev= 8.57 00:24:35.059 clat (usec): min=2071, max=15551, avg=9090.43, stdev=1331.52 00:24:35.059 lat (usec): min=2111, max=15689, avg=9131.47, stdev=1333.91 00:24:35.059 clat percentiles (usec): 00:24:35.059 | 1.00th=[ 6194], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[ 8029], 00:24:35.059 | 
30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:24:35.059 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[11207], 00:24:35.059 | 99.00th=[12649], 99.50th=[13698], 99.90th=[15139], 99.95th=[15270], 00:24:35.059 | 99.99th=[15533] 00:24:35.059 bw ( KiB/s): min=73952, max=85952, per=88.46%, avg=78016.00, stdev=5385.71, samples=4 00:24:35.059 iops : min= 4622, max= 5372, avg=4876.00, stdev=336.61, samples=4 00:24:35.059 lat (msec) : 4=0.53%, 10=76.83%, 20=22.64% 00:24:35.059 cpu : usr=84.89%, sys=13.57%, ctx=16, majf=0, minf=27 00:24:35.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:35.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:35.059 issued rwts: total=18863,9789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.059 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:35.059 00:24:35.059 Run status group 0 (all jobs): 00:24:35.059 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=295MiB (309MB), run=2006-2006msec 00:24:35.059 WRITE: bw=86.1MiB/s (90.3MB/s), 86.1MiB/s-86.1MiB/s (90.3MB/s-90.3MB/s), io=153MiB (160MB), run=1776-1776msec 00:24:35.059 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.059 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:35.059 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:35.060 09:10:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:35.060 rmmod nvme_tcp 00:24:35.060 rmmod nvme_fabrics 00:24:35.060 rmmod nvme_keyring 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 799033 ']' 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 799033 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 799033 ']' 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 799033 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 799033 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 799033' 00:24:35.060 killing process 
with pid 799033 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 799033 00:24:35.060 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 799033 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.322 09:10:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.235 09:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:37.235 00:24:37.235 real 0m18.043s 00:24:37.235 user 0m59.001s 00:24:37.235 sys 0m7.907s 00:24:37.235 09:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.235 09:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.235 ************************************ 00:24:37.235 END TEST nvmf_fio_host 
00:24:37.235 ************************************ 00:24:37.235 09:10:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:37.235 09:10:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:37.235 09:10:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.235 09:10:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.496 ************************************ 00:24:37.496 START TEST nvmf_failover 00:24:37.496 ************************************ 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:37.496 * Looking for test storage... 00:24:37.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.496 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:37.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.497 --rc genhtml_branch_coverage=1 00:24:37.497 --rc genhtml_function_coverage=1 00:24:37.497 --rc genhtml_legend=1 00:24:37.497 --rc geninfo_all_blocks=1 00:24:37.497 --rc geninfo_unexecuted_blocks=1 00:24:37.497 00:24:37.497 ' 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:37.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.497 --rc genhtml_branch_coverage=1 00:24:37.497 --rc genhtml_function_coverage=1 00:24:37.497 --rc genhtml_legend=1 00:24:37.497 --rc geninfo_all_blocks=1 00:24:37.497 --rc geninfo_unexecuted_blocks=1 00:24:37.497 00:24:37.497 ' 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:37.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.497 --rc genhtml_branch_coverage=1 00:24:37.497 --rc genhtml_function_coverage=1 00:24:37.497 --rc genhtml_legend=1 00:24:37.497 --rc geninfo_all_blocks=1 00:24:37.497 --rc geninfo_unexecuted_blocks=1 00:24:37.497 00:24:37.497 ' 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:37.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.497 --rc genhtml_branch_coverage=1 00:24:37.497 --rc genhtml_function_coverage=1 00:24:37.497 --rc genhtml_legend=1 00:24:37.497 --rc geninfo_all_blocks=1 
00:24:37.497 --rc geninfo_unexecuted_blocks=1 00:24:37.497 00:24:37.497 ' 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:37.497 09:10:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:37.497 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.758 09:10:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.905 09:10:10 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:45.905 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:45.905 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:45.905 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:45.905 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.905 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:45.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:24:45.906 00:24:45.906 --- 10.0.0.2 ping statistics --- 00:24:45.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.906 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:24:45.906 00:24:45.906 --- 10.0.0.1 ping statistics --- 00:24:45.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.906 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=805052 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 805052 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 805052 ']' 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.906 09:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.906 [2024-11-20 09:10:10.573474] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:24:45.906 [2024-11-20 09:10:10.573540] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.906 [2024-11-20 09:10:10.672140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:45.906 [2024-11-20 09:10:10.723532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.906 [2024-11-20 09:10:10.723581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.906 [2024-11-20 09:10:10.723589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.906 [2024-11-20 09:10:10.723596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:45.906 [2024-11-20 09:10:10.723603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.906 [2024-11-20 09:10:10.725476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.906 [2024-11-20 09:10:10.725643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.906 [2024-11-20 09:10:10.725644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.906 09:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.906 09:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:45.906 09:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:45.906 09:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:45.906 09:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.906 09:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.906 09:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:46.167 [2024-11-20 09:10:11.592920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.167 09:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:46.428 Malloc0 00:24:46.428 09:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:46.689 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:46.950 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.950 [2024-11-20 09:10:12.400803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.950 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:47.211 [2024-11-20 09:10:12.593380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:47.211 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:47.472 [2024-11-20 09:10:12.786072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:47.472 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:47.472 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=805421 00:24:47.472 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:47.472 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 805421 /var/tmp/bdevperf.sock 00:24:47.472 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 805421 ']' 00:24:47.472 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.472 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.472 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.472 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.472 09:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.414 09:10:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.414 09:10:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:48.414 09:10:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:48.673 NVMe0n1 00:24:48.673 09:10:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:48.933 00:24:48.933 09:10:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=805753 00:24:48.933 09:10:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:48.933 09:10:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:24:49.874 09:10:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.136 [2024-11-20 09:10:15.539925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10314f0 is same with the state(6) to be set 00:24:50.137 09:10:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:53.438 09:10:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:53.438 00:24:53.438 09:10:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:53.699 [2024-11-20 09:10:18.994231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032040 is same with the state(6) to be set 00:24:53.700 [2024-11-20 09:10:18.994679]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032040 is same with the state(6) to be set 00:24:53.700 [2024-11-20 09:10:18.994684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032040 is same with the state(6) to be set 00:24:53.700 [2024-11-20 09:10:18.994689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032040 is same with the state(6) to be set 00:24:53.700 [2024-11-20 09:10:18.994694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032040 is same with the state(6) to be set 00:24:53.700 [2024-11-20 09:10:18.994698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032040 is same with the state(6) to be set 00:24:53.700 [2024-11-20 09:10:18.994703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032040 is same with the state(6) to be set 00:24:53.701 [2024-11-20 09:10:18.994708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032040 is same with the state(6) to be set 00:24:53.701 09:10:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:56.999 09:10:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.999 [2024-11-20 09:10:22.185800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.999 09:10:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:57.943 09:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:57.943 [2024-11-20 09:10:23.371924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xef74c0 is same with the state(6) to be set
00:24:57.943 09:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 805753
00:25:04.539 {
00:25:04.539   "results": [
00:25:04.539     {
00:25:04.539       "job": "NVMe0n1",
00:25:04.539       "core_mask": "0x1",
00:25:04.539       "workload": "verify",
00:25:04.539       "status": "finished",
00:25:04.539       "verify_range": {
00:25:04.539         "start": 0,
00:25:04.539         "length": 16384
00:25:04.539       },
00:25:04.539       "queue_depth": 128,
00:25:04.539       "io_size": 4096,
00:25:04.539       "runtime": 15.007225,
00:25:04.539       "iops": 12412.821157808989,
00:25:04.539       "mibps": 48.48758264769136,
00:25:04.539       "io_failed": 5749,
00:25:04.539       "io_timeout": 0,
00:25:04.539       "avg_latency_us": 9981.62149958427,
00:25:04.539       "min_latency_us": 366.93333333333334,
00:25:04.539       "max_latency_us": 20097.706666666665
00:25:04.539     }
00:25:04.539   ],
00:25:04.539   "core_count": 1
00:25:04.539 }
00:25:04.539 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 805421
09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 805421 ']'
00:25:04.539 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 805421
00:25:04.539 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:25:04.539 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:04.539 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 805421
00:25:04.539 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:04.539 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:04.539 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 805421'
00:25:04.539 killing process with pid 805421
00:25:04.539 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 805421
00:25:04.539 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 805421
00:25:04.539 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:04.539 [2024-11-20 09:10:12.865649] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization...
00:25:04.539 [2024-11-20 09:10:12.865730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805421 ]
00:25:04.539 [2024-11-20 09:10:12.960777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:04.539 [2024-11-20 09:10:13.003509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:04.539 Running I/O for 15 seconds...
00:25:04.539 11274.00 IOPS, 44.04 MiB/s [2024-11-20T08:10:30.068Z]
[2024-11-20 09:10:15.540867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.540900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.540918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.540927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.540937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.540945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.540954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.540961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.540971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.540979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.540988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.540996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.539 [2024-11-20 09:10:15.541309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.539 [2024-11-20 09:10:15.541492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.539 [2024-11-20 09:10:15.541502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.541984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.541993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.542001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.542011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.542018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.542027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.542034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.542044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.542051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.542061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.542068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.542078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.542085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.542094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.542102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.542111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.542119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.542128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.540 [2024-11-20 09:10:15.542135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.540 [2024-11-20 09:10:15.542144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540
[2024-11-20 09:10:15.542152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 
[2024-11-20 09:10:15.542441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.540 [2024-11-20 09:10:15.542725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.540 [2024-11-20 09:10:15.542758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.540 [2024-11-20 09:10:15.542768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.542983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.542990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.543000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.541 [2024-11-20 09:10:15.543008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.543018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.543025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.543034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:15.543041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.543066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.541 [2024-11-20 09:10:15.543073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.541 [2024-11-20 09:10:15.543080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97448 len:8 PRP1 0x0 PRP2 0x0 00:25:04.541 [2024-11-20 09:10:15.543089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.543130] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:04.541 [2024-11-20 09:10:15.543151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.541 [2024-11-20 09:10:15.543164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 
[2024-11-20 09:10:15.543173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.541 [2024-11-20 09:10:15.543180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.543188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.541 [2024-11-20 09:10:15.543195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.543204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.541 [2024-11-20 09:10:15.543211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:15.543225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:04.541 [2024-11-20 09:10:15.546788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:04.541 [2024-11-20 09:10:15.546811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x579d70 (9): Bad file descriptor 00:25:04.541 [2024-11-20 09:10:15.618032] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:25:04.541 10898.00 IOPS, 42.57 MiB/s [2024-11-20T08:10:30.070Z] 11012.67 IOPS, 43.02 MiB/s [2024-11-20T08:10:30.070Z] 11470.75 IOPS, 44.81 MiB/s [2024-11-20T08:10:30.070Z] [2024-11-20 09:10:18.995716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.541 [2024-11-20 09:10:18.995746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for sequential LBAs 48640 through 48912 (len:8 each, varying cids) elided ...]
00:25:04.541 [2024-11-20 09:10:18.996177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.541 [2024-11-20 09:10:18.996183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:18.996189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.541 [2024-11-20 09:10:18.996194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:18.996200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.541 [2024-11-20 09:10:18.996206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:18.996212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.541 [2024-11-20 09:10:18.996218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.541 [2024-11-20 09:10:18.996225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.541 [2024-11-20 09:10:18.996230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 
[2024-11-20 09:10:18.996344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996408] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 
[2024-11-20 09:10:18.996677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 
[2024-11-20 09:10:18.996872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.996989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.996994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.997000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.997005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.997011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.997017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.997023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.997028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.997034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.997039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.997045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.997050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.997057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.997062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 
09:10:18.997069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.997074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.997080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.997085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.997092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.997098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.997105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.542 [2024-11-20 09:10:18.997110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.542 [2024-11-20 09:10:18.997116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.543 [2024-11-20 09:10:18.997121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.543 [2024-11-20 09:10:18.997132] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.543 [2024-11-20 09:10:18.997143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.543 [2024-11-20 09:10:18.997155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.543 [2024-11-20 09:10:18.997169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.543 [2024-11-20 09:10:18.997180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.543 [2024-11-20 09:10:18.997191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:04.543 [2024-11-20 09:10:18.997202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.543 [2024-11-20 09:10:18.997214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.543 [2024-11-20 09:10:18.997225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.543 [2024-11-20 09:10:18.997245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.543 [2024-11-20 09:10:18.997250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49648 len:8 PRP1 0x0 PRP2 0x0 00:25:04.543 [2024-11-20 09:10:18.997256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997288] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:04.543 [2024-11-20 09:10:18.997305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.543 [2024-11-20 09:10:18.997313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:04.543 [2024-11-20 09:10:18.997322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.543 [2024-11-20 09:10:18.997330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.543 [2024-11-20 09:10:18.997345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.543 [2024-11-20 09:10:18.997356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:18.997361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:04.543 [2024-11-20 09:10:18.999784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:04.543 [2024-11-20 09:10:18.999805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x579d70 (9): Bad file descriptor 00:25:04.543 [2024-11-20 09:10:19.025292] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:25:04.543 11658.00 IOPS, 45.54 MiB/s [2024-11-20T08:10:30.072Z] 11879.83 IOPS, 46.41 MiB/s [2024-11-20T08:10:30.072Z] 12000.86 IOPS, 46.88 MiB/s [2024-11-20T08:10:30.072Z] 12130.12 IOPS, 47.38 MiB/s [2024-11-20T08:10:30.072Z] [2024-11-20 09:10:23.372355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 
[2024-11-20 09:10:23.372450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:04.543 [2024-11-20 09:10:23.372654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:04.543 [2024-11-20 09:10:23.372854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.372993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.372998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.373004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.373009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.373016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.373020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.373027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.373032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.373038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.373045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:04.543 [2024-11-20 09:10:23.373053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.373058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.543 [2024-11-20 09:10:23.373064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.543 [2024-11-20 09:10:23.373070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.544 [2024-11-20 09:10:23.373082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.544 [2024-11-20 09:10:23.373093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.544 [2024-11-20 09:10:23.373104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 
[2024-11-20 09:10:23.373260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 
[2024-11-20 09:10:23.373456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 
[2024-11-20 09:10:23.373652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.544 [2024-11-20 09:10:23.373673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.544 [2024-11-20 09:10:23.373698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115864 len:8 PRP1 0x0 PRP2 0x0 00:25:04.544 [2024-11-20 09:10:23.373703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.544 [2024-11-20 09:10:23.373714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.544 [2024-11-20 09:10:23.373719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115872 len:8 PRP1 0x0 PRP2 0x0 00:25:04.544 [2024-11-20 09:10:23.373724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.544 [2024-11-20 09:10:23.373733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:25:04.544 [2024-11-20 09:10:23.373737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115880 len:8 PRP1 0x0 PRP2 0x0 00:25:04.544 [2024-11-20 09:10:23.373742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.544 [2024-11-20 09:10:23.373751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.544 [2024-11-20 09:10:23.373756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115888 len:8 PRP1 0x0 PRP2 0x0 00:25:04.544 [2024-11-20 09:10:23.373761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.544 [2024-11-20 09:10:23.373769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.544 [2024-11-20 09:10:23.373774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115896 len:8 PRP1 0x0 PRP2 0x0 00:25:04.544 [2024-11-20 09:10:23.373779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.544 [2024-11-20 09:10:23.373788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.544 [2024-11-20 09:10:23.373792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115904 len:8 PRP1 0x0 PRP2 0x0 00:25:04.544 [2024-11-20 09:10:23.373797] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.544 [2024-11-20 09:10:23.373808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.544 [2024-11-20 09:10:23.373812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115912 len:8 PRP1 0x0 PRP2 0x0 00:25:04.544 [2024-11-20 09:10:23.373817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.544 [2024-11-20 09:10:23.373826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.544 [2024-11-20 09:10:23.373830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115920 len:8 PRP1 0x0 PRP2 0x0 00:25:04.544 [2024-11-20 09:10:23.373835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.544 [2024-11-20 09:10:23.373845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.544 [2024-11-20 09:10:23.373849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115928 len:8 PRP1 0x0 PRP2 0x0 00:25:04.544 [2024-11-20 09:10:23.373854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.544 [2024-11-20 09:10:23.373863] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.544 [2024-11-20 09:10:23.373867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115936 len:8 PRP1 0x0 PRP2 0x0 00:25:04.544 [2024-11-20 09:10:23.373872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.544 [2024-11-20 09:10:23.373878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.544 [2024-11-20 09:10:23.373882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.544 [2024-11-20 09:10:23.373886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115944 len:8 PRP1 0x0 PRP2 0x0 00:25:04.545 [2024-11-20 09:10:23.373891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.373896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.545 [2024-11-20 09:10:23.373900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.545 [2024-11-20 09:10:23.373904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115952 len:8 PRP1 0x0 PRP2 0x0 00:25:04.545 [2024-11-20 09:10:23.373909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.373914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.545 [2024-11-20 09:10:23.373918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.545 [2024-11-20 09:10:23.373923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115424 len:8 PRP1 0x0 PRP2 
0x0 00:25:04.545 [2024-11-20 09:10:23.373928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.373933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.545 [2024-11-20 09:10:23.373937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.545 [2024-11-20 09:10:23.373941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115432 len:8 PRP1 0x0 PRP2 0x0 00:25:04.545 [2024-11-20 09:10:23.373947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.373953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.545 [2024-11-20 09:10:23.373957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.545 [2024-11-20 09:10:23.373961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115440 len:8 PRP1 0x0 PRP2 0x0 00:25:04.545 [2024-11-20 09:10:23.373966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.373971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.545 [2024-11-20 09:10:23.373975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.545 [2024-11-20 09:10:23.373982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115448 len:8 PRP1 0x0 PRP2 0x0 00:25:04.545 [2024-11-20 09:10:23.373987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.373992] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.545 [2024-11-20 09:10:23.373996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.545 [2024-11-20 09:10:23.374000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115456 len:8 PRP1 0x0 PRP2 0x0 00:25:04.545 [2024-11-20 09:10:23.374005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.374011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.545 [2024-11-20 09:10:23.374014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.545 [2024-11-20 09:10:23.374018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115464 len:8 PRP1 0x0 PRP2 0x0 00:25:04.545 [2024-11-20 09:10:23.374024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.374029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.545 [2024-11-20 09:10:23.374033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.545 [2024-11-20 09:10:23.374037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115472 len:8 PRP1 0x0 PRP2 0x0 00:25:04.545 [2024-11-20 09:10:23.374042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.384993] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:04.545 [2024-11-20 09:10:23.385042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.545 [2024-11-20 09:10:23.385053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.385063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.545 [2024-11-20 09:10:23.385070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.385078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.545 [2024-11-20 09:10:23.385084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.385092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.545 [2024-11-20 09:10:23.385111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.545 [2024-11-20 09:10:23.385118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:04.545 [2024-11-20 09:10:23.385167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x579d70 (9): Bad file descriptor 00:25:04.545 [2024-11-20 09:10:23.388424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:04.545 [2024-11-20 09:10:23.412242] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:25:04.545 12143.67 IOPS, 47.44 MiB/s [2024-11-20T08:10:30.074Z] 12195.20 IOPS, 47.64 MiB/s [2024-11-20T08:10:30.074Z] 12259.27 IOPS, 47.89 MiB/s [2024-11-20T08:10:30.074Z] 12297.67 IOPS, 48.04 MiB/s [2024-11-20T08:10:30.074Z] 12347.77 IOPS, 48.23 MiB/s [2024-11-20T08:10:30.074Z] 12387.50 IOPS, 48.39 MiB/s 00:25:04.545 Latency(us) 00:25:04.545 [2024-11-20T08:10:30.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.545 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:04.545 Verification LBA range: start 0x0 length 0x4000 00:25:04.545 NVMe0n1 : 15.01 12412.82 48.49 383.08 0.00 9981.62 366.93 20097.71 00:25:04.545 [2024-11-20T08:10:30.074Z] =================================================================================================================== 00:25:04.545 [2024-11-20T08:10:30.074Z] Total : 12412.82 48.49 383.08 0.00 9981.62 366.93 20097.71 00:25:04.545 Received shutdown signal, test time was about 15.000000 seconds 00:25:04.545 00:25:04.545 Latency(us) 00:25:04.545 [2024-11-20T08:10:30.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.545 [2024-11-20T08:10:30.074Z] =================================================================================================================== 00:25:04.545 [2024-11-20T08:10:30.074Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=808765 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 808765 /var/tmp/bdevperf.sock 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 808765 ']' 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.545 09:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:05.116 09:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.116 09:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:05.116 09:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:05.377 [2024-11-20 09:10:30.720258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:05.377 09:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:05.638 [2024-11-20 09:10:30.904730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:05.638 09:10:30 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:05.898 NVMe0n1 00:25:05.898 09:10:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:06.158 00:25:06.158 09:10:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:06.419 00:25:06.419 09:10:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:06.419 09:10:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:06.679 09:10:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:06.679 09:10:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:09.976 09:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:09.976 09:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:09.976 09:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=809792 00:25:09.976 09:10:35 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:09.976 09:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 809792 00:25:11.359 { 00:25:11.359 "results": [ 00:25:11.359 { 00:25:11.359 "job": "NVMe0n1", 00:25:11.359 "core_mask": "0x1", 00:25:11.359 "workload": "verify", 00:25:11.359 "status": "finished", 00:25:11.359 "verify_range": { 00:25:11.359 "start": 0, 00:25:11.359 "length": 16384 00:25:11.359 }, 00:25:11.359 "queue_depth": 128, 00:25:11.359 "io_size": 4096, 00:25:11.359 "runtime": 1.00436, 00:25:11.359 "iops": 12683.69907204588, 00:25:11.359 "mibps": 49.54569950017922, 00:25:11.359 "io_failed": 0, 00:25:11.359 "io_timeout": 0, 00:25:11.359 "avg_latency_us": 10056.06717848078, 00:25:11.359 "min_latency_us": 962.56, 00:25:11.359 "max_latency_us": 13598.72 00:25:11.359 } 00:25:11.359 ], 00:25:11.359 "core_count": 1 00:25:11.359 } 00:25:11.359 09:10:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:11.359 [2024-11-20 09:10:29.767140] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:25:11.359 [2024-11-20 09:10:29.767204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid808765 ] 00:25:11.359 [2024-11-20 09:10:29.851221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.359 [2024-11-20 09:10:29.880334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.359 [2024-11-20 09:10:32.163911] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:11.359 [2024-11-20 09:10:32.163947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.359 [2024-11-20 09:10:32.163955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.359 [2024-11-20 09:10:32.163962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.359 [2024-11-20 09:10:32.163968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.360 [2024-11-20 09:10:32.163973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.360 [2024-11-20 09:10:32.163979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.360 [2024-11-20 09:10:32.163984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.360 [2024-11-20 09:10:32.163989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.360 [2024-11-20 09:10:32.163995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:11.360 [2024-11-20 09:10:32.164014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:11.360 [2024-11-20 09:10:32.164025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x887d70 (9): Bad file descriptor 00:25:11.360 [2024-11-20 09:10:32.216130] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:11.360 Running I/O for 1 seconds... 00:25:11.360 12611.00 IOPS, 49.26 MiB/s 00:25:11.360 Latency(us) 00:25:11.360 [2024-11-20T08:10:36.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.360 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:11.360 Verification LBA range: start 0x0 length 0x4000 00:25:11.360 NVMe0n1 : 1.00 12683.70 49.55 0.00 0.00 10056.07 962.56 13598.72 00:25:11.360 [2024-11-20T08:10:36.889Z] =================================================================================================================== 00:25:11.360 [2024-11-20T08:10:36.889Z] Total : 12683.70 49.55 0.00 0.00 10056.07 962.56 13598.72 00:25:11.360 09:10:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:11.360 09:10:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:11.360 09:10:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:11.360 09:10:36 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:11.360 09:10:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:11.620 09:10:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:11.881 09:10:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 808765 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 808765 ']' 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 808765 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 808765 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 808765' 00:25:15.202 killing process 
with pid 808765 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 808765 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 808765 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:15.202 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:15.463 rmmod nvme_tcp 00:25:15.463 rmmod nvme_fabrics 00:25:15.463 rmmod nvme_keyring 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 805052 ']' 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@518 -- # killprocess 805052 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 805052 ']' 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 805052 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 805052 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 805052' 00:25:15.463 killing process with pid 805052 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 805052 00:25:15.463 09:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 805052 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.725 09:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.638 09:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:17.638 00:25:17.638 real 0m40.350s 00:25:17.638 user 2m4.132s 00:25:17.638 sys 0m8.699s 00:25:17.638 09:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:17.638 09:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:17.638 ************************************ 00:25:17.638 END TEST nvmf_failover 00:25:17.638 ************************************ 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.899 ************************************ 00:25:17.899 START TEST nvmf_host_discovery 00:25:17.899 ************************************ 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:17.899 * Looking for test storage... 
00:25:17.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:17.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.899 --rc genhtml_branch_coverage=1 00:25:17.899 --rc genhtml_function_coverage=1 00:25:17.899 --rc 
genhtml_legend=1 00:25:17.899 --rc geninfo_all_blocks=1 00:25:17.899 --rc geninfo_unexecuted_blocks=1 00:25:17.899 00:25:17.899 ' 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:17.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.899 --rc genhtml_branch_coverage=1 00:25:17.899 --rc genhtml_function_coverage=1 00:25:17.899 --rc genhtml_legend=1 00:25:17.899 --rc geninfo_all_blocks=1 00:25:17.899 --rc geninfo_unexecuted_blocks=1 00:25:17.899 00:25:17.899 ' 00:25:17.899 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:17.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.900 --rc genhtml_branch_coverage=1 00:25:17.900 --rc genhtml_function_coverage=1 00:25:17.900 --rc genhtml_legend=1 00:25:17.900 --rc geninfo_all_blocks=1 00:25:17.900 --rc geninfo_unexecuted_blocks=1 00:25:17.900 00:25:17.900 ' 00:25:17.900 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:17.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.900 --rc genhtml_branch_coverage=1 00:25:17.900 --rc genhtml_function_coverage=1 00:25:17.900 --rc genhtml_legend=1 00:25:17.900 --rc geninfo_all_blocks=1 00:25:17.900 --rc geninfo_unexecuted_blocks=1 00:25:17.900 00:25:17.900 ' 00:25:17.900 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.161 09:10:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.161 09:10:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.161 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.161 09:10:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:18.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:18.162 09:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:26.337 
09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.337 09:10:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:26.337 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:26.337 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:26.337 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:26.337 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:26.337 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:26.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:25:26.338 00:25:26.338 --- 10.0.0.2 ping statistics --- 00:25:26.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.338 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:26.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:25:26.338 00:25:26.338 --- 10.0.0.1 ping statistics --- 00:25:26.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.338 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.338 
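The nvmf/common.sh trace above moves one port of the NIC pair into a private network namespace, assigns the 10.0.0.0/24 address pair, opens TCP/4420 via iptables, and ping-checks both directions. A minimal dry-run sketch of that sequence (the interface names cvl_0_0/cvl_0_1 and the namespace name are taken from this log; `run()` only echoes each command, so the sketch can be traced without root or real interfaces):

```shell
#!/bin/sh
# Dry-run sketch of the namespace wiring traced above. run() echoes
# instead of executing, so no root privileges or NICs are required.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                                  # target-side namespace, as in the log

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                 # target port moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
```

Dropping the `run` prefix (and running as root against real interfaces) reproduces the setup the suite performs.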
09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=815128 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 815128 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 815128 ']' 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:26.338 09:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.338 [2024-11-20 09:10:51.028237] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:25:26.338 [2024-11-20 09:10:51.028302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.338 [2024-11-20 09:10:51.128307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.338 [2024-11-20 09:10:51.177938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.338 [2024-11-20 09:10:51.177993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.338 [2024-11-20 09:10:51.178001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.338 [2024-11-20 09:10:51.178009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.338 [2024-11-20 09:10:51.178015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:26.338 [2024-11-20 09:10:51.178786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.338 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.338 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:26.338 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:26.338 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:26.338 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.600 [2024-11-20 09:10:51.904069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.600 [2024-11-20 09:10:51.916410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:26.600 09:10:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.600 null0 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.600 null1 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=815402 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 815402 /tmp/host.sock 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 815402 ']' 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:26.600 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:26.601 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:26.601 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:26.601 09:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.601 [2024-11-20 09:10:52.025045] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:25:26.601 [2024-11-20 09:10:52.025115] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815402 ] 00:25:26.601 [2024-11-20 09:10:52.117653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.862 [2024-11-20 09:10:52.170210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:27.435 09:10:52 
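At this point both targets are up: the in-namespace `nvmf_tgt` on the default /var/tmp/spdk.sock and the host-side one on /tmp/host.sock. The target-side configuration the suite's `rpc_cmd` wrapper issues above can be sketched as plain `scripts/rpc.py` calls (a dry-run: `rpc()` echoes rather than invoking SPDK's rpc.py, whose checkout path is an assumption):

```shell
#!/bin/sh
# Dry-run sketch of the target-side RPC bring-up traced above: a TCP
# transport with 8192-byte in-capsule data, a discovery listener on
# port 8009, and two 1000 MiB / 512 B-block null bdevs.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
rpc bdev_null_create null0 1000 512
rpc bdev_null_create null1 1000 512
rpc bdev_wait_for_examine
```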
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:27.435 09:10:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.435 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.697 09:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:27.697 
09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.697 [2024-11-20 09:10:53.163593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.697 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.698 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.698 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.698 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.698 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.698 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:27.698 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:27.960 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
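The subsystem configuration interleaved with the checks above follows the same pattern: create cnode0, attach null0 as a namespace, listen on TCP/4420, and allow the test host NQN. Sketched as a dry-run (again `rpc()` echoes instead of calling scripts/rpc.py):

```shell
#!/bin/sh
# Dry-run sketch of the subsystem setup traced above.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
```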
00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:27.961 09:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:28.533 [2024-11-20 09:10:53.862720] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:28.533 [2024-11-20 09:10:53.862740] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:28.533 [2024-11-20 09:10:53.862753] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:28.533 [2024-11-20 09:10:53.950028] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:28.793 [2024-11-20 09:10:54.174484] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:28.793 [2024-11-20 09:10:54.175576] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0xb96780:1 started. 00:25:28.793 [2024-11-20 09:10:54.177200] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:28.793 [2024-11-20 09:10:54.177219] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:28.793 [2024-11-20 09:10:54.182219] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb96780 was disconnected and freed. delete nvme_qpair. 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:29.054 09:10:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.054 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.055 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
[2024-11-20 09:10:54.608905] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb96b20:1 started.
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
[2024-11-20 09:10:54.612636] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb96b20 was disconnected and freed. delete nvme_qpair.
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:29.316 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 09:10:54.711424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
[2024-11-20 09:10:54.712065] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-11-20 09:10:54.712085] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.317 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
[2024-11-20 09:10:54.839474] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:25:29.577 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.577 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:25:29.577 09:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
[2024-11-20 09:10:54.939237] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
[2024-11-20 09:10:54.939273] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-11-20 09:10:54.939282] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:29.577 [2024-11-20 09:10:54.939287] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.521 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 09:10:55.967260] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-11-20 09:10:55.967281] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
[2024-11-20 09:10:55.968862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 09:10:55.968879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:10:55.968888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 09:10:55.968895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:10:55.968904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 09:10:55.968911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:10:55.968919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 09:10:55.968926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:10:55.968939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66e10 is same with the state(6) to be set
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 09:10:55.978876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66e10 (9): Bad file descriptor
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
[2024-11-20 09:10:55.988912] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-20 09:10:55.988926] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-20 09:10:55.988931] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-20 09:10:55.988937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-20 09:10:55.988955] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-20 09:10:55.989379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-20 09:10:55.989417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66e10 with addr=10.0.0.2, port=4420
[2024-11-20 09:10:55.989427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66e10 is same with the state(6) to be set
[2024-11-20 09:10:55.989446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66e10 (9): Bad file descriptor
[2024-11-20 09:10:55.989471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-20 09:10:55.989479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-20 09:10:55.989488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-20 09:10:55.989496] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-20 09:10:55.989502] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-20 09:10:55.989506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:30.522 09:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-11-20 09:10:55.998987] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-20 09:10:55.999006] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-20 09:10:55.999012] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-20 09:10:55.999016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-20 09:10:55.999033] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-20 09:10:55.999358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-20 09:10:55.999372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66e10 with addr=10.0.0.2, port=4420
[2024-11-20 09:10:55.999380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66e10 is same with the state(6) to be set
[2024-11-20 09:10:55.999392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66e10 (9): Bad file descriptor
[2024-11-20 09:10:55.999403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-20 09:10:55.999409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-20 09:10:55.999417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-20 09:10:55.999423] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-20 09:10:55.999428] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-20 09:10:55.999432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-11-20 09:10:56.009065] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-20 09:10:56.009076] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-20 09:10:56.009081] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-20 09:10:56.009086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-20 09:10:56.009100] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-20 09:10:56.009384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-20 09:10:56.009396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66e10 with addr=10.0.0.2, port=4420
[2024-11-20 09:10:56.009404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66e10 is same with the state(6) to be set
[2024-11-20 09:10:56.009415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66e10 (9): Bad file descriptor
[2024-11-20 09:10:56.009426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-20 09:10:56.009432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-20 09:10:56.009440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-20 09:10:56.009446] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-20 09:10:56.009451] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-20 09:10:56.009455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-11-20 09:10:56.019133] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-20 09:10:56.019148] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-20 09:10:56.019153] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-20 09:10:56.019161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-20 09:10:56.019177] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-20 09:10:56.019490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-20 09:10:56.019503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66e10 with addr=10.0.0.2, port=4420
[2024-11-20 09:10:56.019511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66e10 is same with the state(6) to be set
[2024-11-20 09:10:56.019522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66e10 (9): Bad file descriptor
[2024-11-20 09:10:56.019534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-20 09:10:56.019540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-20 09:10:56.019547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-20 09:10:56.019553] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-20 09:10:56.019560] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-20 09:10:56.019567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:30.522 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:30.522 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:30.522 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:30.522 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:30.522 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:30.522 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:30.522 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:30.522 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
[2024-11-20 09:10:56.029208] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-20 09:10:56.029221] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-20 09:10:56.029226] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-20 09:10:56.029231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-20 09:10:56.029245] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-20 09:10:56.029539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-20 09:10:56.029550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66e10 with addr=10.0.0.2, port=4420
[2024-11-20 09:10:56.029558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66e10 is same with the state(6) to be set
[2024-11-20 09:10:56.029573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66e10 (9): Bad file descriptor
[2024-11-20 09:10:56.029583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-20 09:10:56.029589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-20 09:10:56.029596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-20 09:10:56.029602] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-20 09:10:56.029607] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-20 09:10:56.029612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:30.523 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:30.523 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:30.523 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.523 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:30.523 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.523 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
[2024-11-20 09:10:56.039276] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-20 09:10:56.039291] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-20 09:10:56.039296] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-20 09:10:56.039301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-20 09:10:56.039316] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:30.523 [2024-11-20 09:10:56.039599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.523 [2024-11-20 09:10:56.039610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66e10 with addr=10.0.0.2, port=4420 00:25:30.523 [2024-11-20 09:10:56.039618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66e10 is same with the state(6) to be set 00:25:30.523 [2024-11-20 09:10:56.039629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66e10 (9): Bad file descriptor 00:25:30.523 [2024-11-20 09:10:56.039639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:30.523 [2024-11-20 09:10:56.039646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:30.523 [2024-11-20 09:10:56.039653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:30.523 [2024-11-20 09:10:56.039659] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:30.523 [2024-11-20 09:10:56.039663] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:30.523 [2024-11-20 09:10:56.039668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:30.784 [2024-11-20 09:10:56.049347] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:30.784 [2024-11-20 09:10:56.049359] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:30.784 [2024-11-20 09:10:56.049364] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:30.784 [2024-11-20 09:10:56.049375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:30.784 [2024-11-20 09:10:56.049389] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:30.784 [2024-11-20 09:10:56.049670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.784 [2024-11-20 09:10:56.049681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66e10 with addr=10.0.0.2, port=4420 00:25:30.784 [2024-11-20 09:10:56.049688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66e10 is same with the state(6) to be set 00:25:30.784 [2024-11-20 09:10:56.049699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66e10 (9): Bad file descriptor 00:25:30.784 [2024-11-20 09:10:56.049709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:30.784 [2024-11-20 09:10:56.049716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:30.784 [2024-11-20 09:10:56.049723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:30.784 [2024-11-20 09:10:56.049729] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:30.784 [2024-11-20 09:10:56.049733] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:30.784 [2024-11-20 09:10:56.049738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:30.784 [2024-11-20 09:10:56.054942] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:30.784 [2024-11-20 09:10:56.054961] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:30.784 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.785 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.045 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:31.045 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:31.045 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:31.045 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:31.045 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.045 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.045 09:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.984 [2024-11-20 09:10:57.393136] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:31.984 [2024-11-20 09:10:57.393151] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:31.984 [2024-11-20 09:10:57.393165] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:32.244 [2024-11-20 09:10:57.520537] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:32.244 [2024-11-20 09:10:57.624218] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:32.244 [2024-11-20 09:10:57.624737] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xb645b0:1 started. 00:25:32.244 [2024-11-20 09:10:57.626107] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:32.244 [2024-11-20 09:10:57.626130] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:32.244 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.244 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.245 request: 00:25:32.245 { 00:25:32.245 "name": "nvme", 00:25:32.245 "trtype": "tcp", 00:25:32.245 "traddr": "10.0.0.2", 00:25:32.245 "adrfam": "ipv4", 00:25:32.245 "trsvcid": "8009", 00:25:32.245 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:32.245 "wait_for_attach": true, 00:25:32.245 "method": "bdev_nvme_start_discovery", 00:25:32.245 "req_id": 1 00:25:32.245 } 00:25:32.245 Got JSON-RPC error response 00:25:32.245 response: 00:25:32.245 { 00:25:32.245 "code": -17, 00:25:32.245 "message": "File exists" 00:25:32.245 } 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.245 [2024-11-20 09:10:57.671675] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xb645b0 was disconnected and freed. delete nvme_qpair. 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.245 request: 00:25:32.245 { 00:25:32.245 "name": "nvme_second", 00:25:32.245 "trtype": "tcp", 00:25:32.245 "traddr": "10.0.0.2", 00:25:32.245 "adrfam": "ipv4", 00:25:32.245 "trsvcid": "8009", 00:25:32.245 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:32.245 "wait_for_attach": true, 00:25:32.245 "method": "bdev_nvme_start_discovery", 00:25:32.245 "req_id": 1 00:25:32.245 } 00:25:32.245 Got JSON-RPC error response 00:25:32.245 response: 00:25:32.245 { 00:25:32.245 "code": -17, 00:25:32.245 "message": "File exists" 00:25:32.245 } 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:25:32.245 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.505 09:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:33.446 [2024-11-20 09:10:58.885519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.446 [2024-11-20 09:10:58.885542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2910 with addr=10.0.0.2, port=8010 00:25:33.446 [2024-11-20 09:10:58.885552] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:33.446 [2024-11-20 09:10:58.885557] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:33.446 [2024-11-20 09:10:58.885563] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:34.386 [2024-11-20 09:10:59.887784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.386 [2024-11-20 09:10:59.887802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2910 with addr=10.0.0.2, port=8010 00:25:34.386 [2024-11-20 09:10:59.887811] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:34.386 [2024-11-20 09:10:59.887816] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:34.386 [2024-11-20 09:10:59.887820] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:35.768 [2024-11-20 09:11:00.889900] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:35.768 request: 00:25:35.768 { 00:25:35.768 "name": "nvme_second", 00:25:35.768 "trtype": "tcp", 00:25:35.768 "traddr": "10.0.0.2", 00:25:35.768 "adrfam": "ipv4", 00:25:35.768 "trsvcid": "8010", 00:25:35.768 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:35.768 "wait_for_attach": false, 00:25:35.768 "attach_timeout_ms": 3000, 00:25:35.768 "method": "bdev_nvme_start_discovery", 00:25:35.768 "req_id": 1 00:25:35.768 } 00:25:35.768 Got JSON-RPC error response 00:25:35.768 response: 00:25:35.768 { 00:25:35.768 "code": -110, 00:25:35.768 "message": "Connection timed out"
00:25:35.768 } 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 815402 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 
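The trace above shows the harness expecting the discovery RPC to fail: autotest_common.sh's `NOT` wrapper runs the command, captures its exit status in `es`, and succeeds only when the status is non-zero (`(( !es == 0 ))`), while treating codes above 128 (signal deaths) as genuine harness failures. A minimal self-contained sketch of that helper, simplified from the traced logic (the real wrapper also validates the command type via `valid_exec_arg`):

```shell
#!/usr/bin/env bash
# Simplified sketch of autotest_common.sh's NOT helper: run a command
# that is *expected* to fail, and invert the result.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 mean the command was killed by a signal -
    # that is a harness failure, not the expected error, so propagate it.
    if ((es > 128)); then
        return "$es"
    fi
    # Invert: an expected failure (es != 0) becomes success.
    (( !es == 0 ))
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success caught"
```

With this in place a test can assert `NOT rpc_cmd ...` and keep running when the RPC times out, exactly as the log shows.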
00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:35.768 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:35.768 rmmod nvme_tcp 00:25:35.768 rmmod nvme_fabrics 00:25:35.768 rmmod nvme_keyring 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 815128 ']' 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 815128 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 815128 ']' 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 815128 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 815128 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:35.768 09:11:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 815128' 00:25:35.768 killing process with pid 815128 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 815128 00:25:35.768 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 815128 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.769 09:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:38.315 00:25:38.315 real 0m20.036s 00:25:38.315 user 0m23.023s 00:25:38.315 sys 0m7.184s 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:38.315 ************************************
00:25:38.315 END TEST nvmf_host_discovery
00:25:38.315 ************************************
00:25:38.315 09:11:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:38.315 09:11:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:38.315 09:11:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:38.315 09:11:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.315 ************************************
00:25:38.315 START TEST nvmf_host_multipath_status
00:25:38.315 ************************************
00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:38.315 * Looking for test storage...
00:25:38.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:38.315 09:11:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.315 09:11:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:38.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.315 --rc genhtml_branch_coverage=1 00:25:38.315 --rc genhtml_function_coverage=1 00:25:38.315 --rc genhtml_legend=1 00:25:38.315 --rc geninfo_all_blocks=1 00:25:38.315 --rc geninfo_unexecuted_blocks=1 00:25:38.315 00:25:38.315 ' 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:38.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.315 --rc genhtml_branch_coverage=1 00:25:38.315 --rc genhtml_function_coverage=1 00:25:38.315 --rc genhtml_legend=1 00:25:38.315 --rc geninfo_all_blocks=1 00:25:38.315 --rc geninfo_unexecuted_blocks=1 00:25:38.315 00:25:38.315 ' 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:38.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.315 --rc genhtml_branch_coverage=1 00:25:38.315 --rc genhtml_function_coverage=1 00:25:38.315 --rc genhtml_legend=1 00:25:38.315 --rc geninfo_all_blocks=1 00:25:38.315 --rc geninfo_unexecuted_blocks=1 00:25:38.315 00:25:38.315 ' 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:38.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.315 --rc genhtml_branch_coverage=1 00:25:38.315 --rc genhtml_function_coverage=1 00:25:38.315 --rc genhtml_legend=1 00:25:38.315 --rc geninfo_all_blocks=1 00:25:38.315 --rc geninfo_unexecuted_blocks=1 00:25:38.315 00:25:38.315 ' 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:38.315 
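The `lt 1.15 2` trace above comes from scripts/common.sh's `cmp_versions`, which splits each version on `.`, `-` and `:` (the `IFS=.-:` / `read -ra` steps) and compares the parts numerically. A self-contained sketch of that comparison, simplified from the traced steps (missing components default to 0):

```shell
#!/usr/bin/env bash
# Sketch of scripts/common.sh's component-wise version comparison:
# cmp_versions VER1 '<'|'>' VER2 succeeds when the relation holds.
cmp_versions() {
    local ver1 ver2 v op=$2
    IFS='.-:' read -ra ver1 <<< "$1"   # split "2.39.2" into (2 39 2)
    IFS='.-:' read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing parts count as 0
        if (( a > b )); then [[ $op == '>' ]]; return; fi
        if (( a < b )); then [[ $op == '<' ]]; return; fi
    done
    return 1   # equal: neither strictly less nor greater
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.15 is older than 2"
```

This is why the harness above selects the pre-2.0 lcov option set after probing `lcov --version`.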
09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.315 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:38.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
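The `line 33: [: : integer expression expected` message logged above is a classic bash pitfall: nvmf/common.sh tests an unset variable with `-eq` (`'[' '' -eq 1 ']'`), and `test` refuses the empty string as an integer. A small illustration of the failure and the usual `${var:-0}` guard (the variable name here is hypothetical, standing in for the unset flag):

```shell
#!/usr/bin/env bash
# Reproduce the "integer expression expected" complaint: -eq needs a
# numeric operand, and an unset variable expands to the empty string.
unset no_huge

if [ "$no_huge" -eq 1 ] 2>/dev/null; then
    echo "flag set"
else
    echo "test errored or flag not set"   # this branch runs: '' is not an integer
fi

# The usual guard: substitute a numeric default before comparing.
if [ "${no_huge:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"                   # runs: default 0 != 1
fi
```

The harness tolerates the error only because the failed test falls through to the else branch, as the log shows the run continuing normally.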
00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:38.316 09:11:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:38.316 09:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:46.459 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:46.460 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:46.460 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:46.460 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.460 09:11:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:25:46.460 Found net devices under 0000:4b:00.1: cvl_0_1
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:46.460 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:46.461 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:46.461 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:46.461 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:46.461 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:46.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:46.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms
00:25:46.461
00:25:46.461 --- 10.0.0.2 ping statistics ---
00:25:46.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:46.461 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:46.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:46.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms
00:25:46.461
00:25:46.461 --- 10.0.0.1 ping statistics ---
00:25:46.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:46.461 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=821345
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 821345
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 821345 ']'
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:46.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:46.461 [2024-11-20 09:11:11.170109] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization...
00:25:46.461 [2024-11-20 09:11:11.170197] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:46.461 [2024-11-20 09:11:11.271394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:25:46.461 [2024-11-20 09:11:11.322839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. [2024-11-20 09:11:11.322888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
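One detail worth calling out in the trace above: the `ipts` call at nvmf/common.sh@287 is expanded (at @790) into a plain `iptables` invocation that carries an `SPDK_NVMF:`-prefixed `--comment` recording the original rule arguments. A minimal sketch of such a wrapper follows; this is a hypothetical reconstruction from the log output, not the actual nvmf/common.sh source, and the assumed purpose of the tag is to let teardown code later find and delete exactly the rules the test harness inserted:

```shell
#!/usr/bin/env bash
# Sketch of an iptables wrapper in the style of the ipts() call seen in the
# trace: insert a rule and tag it with an "SPDK_NVMF:" comment that records
# the original arguments (assumed behavior, reconstructed from the log).
ipts() {
    # "$*" flattens the rule arguments into a single comment string.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Demonstration only: stub out iptables with echo so the sketch runs
# without root privileges or netfilter access.
iptables() { echo "iptables $*"; }

# Prints the assembled iptables command, comment tag included.
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

With the stub removed, the wrapper produces exactly the tagged rule visible in the trace for the `--dport 4420 -j ACCEPT` insertion.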
00:25:46.461 [2024-11-20 09:11:11.322897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:46.461 [2024-11-20 09:11:11.322904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:46.461 [2024-11-20 09:11:11.322911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:46.461 [2024-11-20 09:11:11.324681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:46.461 [2024-11-20 09:11:11.324685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:46.461 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:46.723 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:46.723 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=821345
00:25:46.723 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:46.723 [2024-11-20 09:11:12.181008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:46.723 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:46.984 Malloc0
00:25:46.984 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:25:47.245 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:47.506 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:47.506 [2024-11-20 09:11:12.992450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:47.506 09:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:47.768 [2024-11-20 09:11:13.192982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:47.768 09:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=821829
00:25:47.769 09:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:25:47.769 09:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:47.769 09:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 821829 /var/tmp/bdevperf.sock
00:25:47.769 09:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 821829 ']'
00:25:47.769 09:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:47.769 09:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:47.769 09:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:47.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:47.769 09:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:47.769 09:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:48.712 09:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:48.712 09:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:25:48.712 09:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:48.973 09:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:25:49.235 Nvme0n1
00:25:49.235 09:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:25:49.808 Nvme0n1
00:25:49.808 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:25:49.808 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:25:51.720 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:25:51.720 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:25:51.981 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:51.981 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.367 09:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:53.628 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:53.628 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:53.628 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.628 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:53.897 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:53.897 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:53.897 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.897 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:53.897 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:53.897 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:53.897 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.897 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:54.160 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:54.160 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:25:54.160 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:54.420 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:54.680 09:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:25:55.622 09:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:25:55.622 09:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:55.622 09:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.622 09:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:55.622 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:55.622 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:55.622 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.623 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:55.884 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:55.884 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:55.884 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.884 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:56.145 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.145 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:56.145 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.145 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:56.407 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.407 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:56.407 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.407 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:56.407 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.407 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:56.407 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.407 09:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:56.668 09:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.668 09:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:25:56.668 09:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:56.929 09:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:25:56.929 09:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.314 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:58.575 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.575 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:58.575 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.575 09:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:58.836 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.836 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:58.836 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.836 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:58.836 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.836 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:58.836 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.836 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:59.096 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:59.096 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:25:59.096 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:59.356 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:25:59.356 09:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:26:00.740 09:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:26:00.740 09:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:00.740 09:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:00.740 09:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:00.740 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:00.740 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:00.740 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:00.740 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:00.740 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:00.740 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:00.740 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:00.740 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:01.000 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:01.000 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:01.000 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:01.000 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:01.263 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:01.263 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:01.263 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:01.263 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:01.525 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:01.525 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:01.525 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:01.525 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:01.525 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:01.525 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:26:01.525 09:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:26:01.786 09:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:02.047 09:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:26:02.987 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:26:02.987 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:02.987 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:02.987 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:03.254 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:03.254 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:03.254 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:03.254 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:03.254 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:03.254 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:03.254 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:03.254 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:03.514 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:03.514 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:03.514 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:03.514 09:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:03.774 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:03.774 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:26:03.774 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:03.774 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:03.774 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:03.774 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:03.774 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:03.774 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:04.034 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:04.034 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:26:04.034 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:26:04.294 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:04.294 09:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:05.680 09:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:05.680 09:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:05.680 09:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.680 09:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.680 09:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.680 09:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:05.680 09:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.680 09:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.680 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.680 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.680 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.680 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.940 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.940 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.940 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.940 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.199 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.199 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:06.199 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.199 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.199 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.199 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:06.199 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.199 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.460 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.460 09:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:06.722 09:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:06.722 09:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:06.981 09:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:06.981 09:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.363 09:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.625 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.625 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.625 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:08.625 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.886 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.886 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.886 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.886 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.886 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.886 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:08.886 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.886 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:09.147 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.147 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:09.147 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:09.407 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:09.667 09:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:10.609 09:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:10.609 09:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:10.609 09:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.609 09:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.609 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.609 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:10.609 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.609 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.870 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.870 09:11:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.870 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.870 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.130 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.130 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.130 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.130 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.390 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.390 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.390 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.390 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.390 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.390 
09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.390 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.390 09:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.651 09:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.651 09:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:11.651 09:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:11.928 09:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:11.928 09:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:12.977 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:12.977 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:12.977 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.977 09:11:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.238 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.238 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:13.238 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.238 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.499 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.499 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.499 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.499 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.499 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.499 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.499 09:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.499 09:11:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.761 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.761 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.761 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.761 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.022 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.022 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:14.023 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.023 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.023 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.023 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:14.023 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:14.283 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:14.544 09:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:15.488 09:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:15.488 09:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:15.488 09:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.488 09:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.749 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.749 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:15.749 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.749 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.749 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.749 09:11:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.749 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.749 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:16.010 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.010 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:16.010 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.010 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.271 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.271 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:16.271 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.271 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.271 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.271 
09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:16.533 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.533 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.533 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.533 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 821829 00:26:16.533 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 821829 ']' 00:26:16.533 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 821829 00:26:16.533 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:16.533 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:16.533 09:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821829 00:26:16.533 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:16.533 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:16.533 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821829' 00:26:16.533 killing process with pid 821829 00:26:16.533 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 821829 00:26:16.533 09:11:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 821829 00:26:16.798 { 00:26:16.798 "results": [ 00:26:16.798 { 00:26:16.798 "job": "Nvme0n1", 00:26:16.798 "core_mask": "0x4", 00:26:16.798 "workload": "verify", 00:26:16.798 "status": "terminated", 00:26:16.798 "verify_range": { 00:26:16.798 "start": 0, 00:26:16.798 "length": 16384 00:26:16.798 }, 00:26:16.798 "queue_depth": 128, 00:26:16.798 "io_size": 4096, 00:26:16.798 "runtime": 26.815792, 00:26:16.798 "iops": 12094.925258966807, 00:26:16.798 "mibps": 47.24580179283909, 00:26:16.798 "io_failed": 0, 00:26:16.798 "io_timeout": 0, 00:26:16.798 "avg_latency_us": 10563.907055606085, 00:26:16.798 "min_latency_us": 320.85333333333335, 00:26:16.798 "max_latency_us": 3019898.88 00:26:16.798 } 00:26:16.798 ], 00:26:16.798 "core_count": 1 00:26:16.798 } 00:26:16.798 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 821829 00:26:16.798 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:16.798 [2024-11-20 09:11:13.281668] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:26:16.798 [2024-11-20 09:11:13.281752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821829 ] 00:26:16.798 [2024-11-20 09:11:13.374116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.798 [2024-11-20 09:11:13.427476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.798 Running I/O for 90 seconds... 
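The `port_status` checks traced throughout this log all apply the same jq filter, `.poll_groups[].io_paths[] | select (.transport.trsvcid=="<port>").<attr>`, to the output of the `bdev_nvme_get_io_paths` RPC. As a self-contained illustration of what that filter extracts, here is a minimal Python re-expression over a hypothetical sample payload; the sample mocks only the fields the filter touches (the real RPC output contains more), so its exact shape is an assumption:

```python
import json

# Hypothetical sample of bdev_nvme_get_io_paths output; only the fields
# the jq filter in the log touches are mocked here.
sample = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"}, "current": true,
         "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"}, "current": false,
         "connected": true, "accessible": false}
      ]
    }
  ]
}
""")

def port_status(paths, port, attr):
    """Equivalent of:
    jq -r '.poll_groups[].io_paths[]
           | select(.transport.trsvcid=="<port>").<attr>'
    Returns the requested attribute of the first io_path on <port>."""
    for group in paths["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[attr]
    return None

print(port_status(sample, "4420", "current"))     # True
print(port_status(sample, "4421", "accessible"))  # False
```

The shell helper in `multipath_status.sh` then compares the extracted string against the expected `true`/`false`, which is what the repeated `[[ true == \t\r\u\e ]]` lines in the log are testing.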
00:26:16.798 10574.00 IOPS, 41.30 MiB/s [2024-11-20T08:11:42.327Z] 10962.50 IOPS, 42.82 MiB/s [2024-11-20T08:11:42.327Z] 11092.67 IOPS, 43.33 MiB/s [2024-11-20T08:11:42.327Z] 11440.75 IOPS, 44.69 MiB/s [2024-11-20T08:11:42.327Z] 11780.40 IOPS, 46.02 MiB/s [2024-11-20T08:11:42.327Z] 11993.67 IOPS, 46.85 MiB/s [2024-11-20T08:11:42.327Z] 12132.29 IOPS, 47.39 MiB/s [2024-11-20T08:11:42.327Z] 12260.75 IOPS, 47.89 MiB/s [2024-11-20T08:11:42.327Z] 12347.78 IOPS, 48.23 MiB/s [2024-11-20T08:11:42.327Z] 12398.40 IOPS, 48.43 MiB/s [2024-11-20T08:11:42.327Z] 12451.09 IOPS, 48.64 MiB/s [2024-11-20T08:11:42.327Z]
00:26:16.798 [2024-11-20 09:11:27.131856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.798 [2024-11-20 09:11:27.131888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:16.798 [2024-11-20 09:11:27.131922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.798 [2024-11-20 09:11:27.131929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:16.799 [... further nvme_io_qpair_print_command WRITE / spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) notice pairs, identical in form to the entries above, repeat for lba:12128 through lba:12968 (sqhd:004c onward, wrapping at 007f; plus one READ at lba:12104); repeated entries elided ...]
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.801 [2024-11-20 09:11:27.134464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.801 [2024-11-20 09:11:27.134484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.801 [2024-11-20 09:11:27.134504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.801 [2024-11-20 09:11:27.134553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.801 [2024-11-20 09:11:27.134574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.801 [2024-11-20 09:11:27.134596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.801 [2024-11-20 09:11:27.134616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.801 [2024-11-20 09:11:27.134637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.801 [2024-11-20 09:11:27.134657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.801 [2024-11-20 09:11:27.134678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.801 [2024-11-20 09:11:27.134699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.801 [2024-11-20 09:11:27.134734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:27.134740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:27.134757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:27.134762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:27.134778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:27.134783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:27.134799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:27.134804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:27.134820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:27.134825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:27.134841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:27.134846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:27.134862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:27.134867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:27.134883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:27.134888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:16.802 12414.08 IOPS, 48.49 MiB/s [2024-11-20T08:11:42.331Z] 11459.15 IOPS, 44.76 MiB/s [2024-11-20T08:11:42.331Z] 10640.64 IOPS, 41.57 MiB/s [2024-11-20T08:11:42.331Z] 10002.27 IOPS, 39.07 MiB/s [2024-11-20T08:11:42.331Z] 10185.19 IOPS, 39.79 MiB/s [2024-11-20T08:11:42.331Z] 10364.06 IOPS, 40.48 MiB/s [2024-11-20T08:11:42.331Z] 10722.78 IOPS, 41.89 MiB/s [2024-11-20T08:11:42.331Z] 11057.32 IOPS, 43.19 MiB/s [2024-11-20T08:11:42.331Z] 11281.30 IOPS, 44.07 MiB/s [2024-11-20T08:11:42.331Z] 11365.33 IOPS, 44.40 MiB/s [2024-11-20T08:11:42.331Z] 11445.59 IOPS, 44.71 MiB/s [2024-11-20T08:11:42.331Z] 11667.00 IOPS, 45.57 MiB/s [2024-11-20T08:11:42.331Z] 11896.12 IOPS, 46.47 MiB/s [2024-11-20T08:11:42.331Z] [2024-11-20 09:11:39.877073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877109] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:10208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.802 [2024-11-20 09:11:39.877694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:16.802 [2024-11-20 09:11:39.877704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10384 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:26:16.803 [2024-11-20 09:11:39.877847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 
[2024-11-20 09:11:39.877929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.877985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.877990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.878001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.878006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 
09:11:39.878016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.878021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.878031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.878036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.878046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.878051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.878061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.878066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.878076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.878081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.878091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.878096] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.878107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.878112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.879123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.879134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.879146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.879151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.879166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.879171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.879182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.879187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.879197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.803 [2024-11-20 09:11:39.879203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:16.803 [2024-11-20 09:11:39.879213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.804 [2024-11-20 09:11:39.879218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:16.804 [2024-11-20 09:11:39.879228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.804 [2024-11-20 09:11:39.879234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:16.804 [2024-11-20 09:11:39.879244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.804 [2024-11-20 09:11:39.879249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:16.804 [2024-11-20 09:11:39.879260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.804 [2024-11-20 09:11:39.879265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:16.804 [2024-11-20 09:11:39.879275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.804 [2024-11-20 09:11:39.879281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:16.804 [2024-11-20 09:11:39.879291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.804 [2024-11-20 09:11:39.879296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:16.804 [2024-11-20 09:11:39.879307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.804 [2024-11-20 09:11:39.879312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:16.804 12039.72 IOPS, 47.03 MiB/s [2024-11-20T08:11:42.333Z] 12072.08 IOPS, 47.16 MiB/s [2024-11-20T08:11:42.333Z] Received shutdown signal, test time was about 26.816404 seconds 00:26:16.804 00:26:16.804 Latency(us) 00:26:16.804 [2024-11-20T08:11:42.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.804 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:16.804 Verification LBA range: start 0x0 length 0x4000 00:26:16.804 Nvme0n1 : 26.82 12094.93 47.25 0.00 0.00 10563.91 320.85 3019898.88 00:26:16.804 [2024-11-20T08:11:42.333Z] =================================================================================================================== 00:26:16.804 [2024-11-20T08:11:42.333Z] Total : 12094.93 47.25 0.00 0.00 10563.91 320.85 3019898.88 00:26:16.804 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - 
SIGINT SIGTERM EXIT 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:17.066 rmmod nvme_tcp 00:26:17.066 rmmod nvme_fabrics 00:26:17.066 rmmod nvme_keyring 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 821345 ']' 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 821345 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 821345 ']' 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 821345 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:17.066 
09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821345 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821345' 00:26:17.066 killing process with pid 821345 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 821345 00:26:17.066 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 821345 00:26:17.328 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:17.328 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:17.328 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:17.328 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:17.328 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:17.328 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:17.328 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:17.328 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:17.328 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:17.328 09:11:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.328 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.328 09:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.241 09:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:19.241 00:26:19.241 real 0m41.341s 00:26:19.241 user 1m46.864s 00:26:19.241 sys 0m11.659s 00:26:19.241 09:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.241 09:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:19.241 ************************************ 00:26:19.241 END TEST nvmf_host_multipath_status 00:26:19.241 ************************************ 00:26:19.241 09:11:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:19.241 09:11:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:19.241 09:11:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.241 09:11:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.241 ************************************ 00:26:19.241 START TEST nvmf_discovery_remove_ifc 00:26:19.241 ************************************ 00:26:19.241 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:19.503 * Looking for test storage... 
00:26:19.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:26:19.503 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:26:19.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.504 --rc genhtml_branch_coverage=1 00:26:19.504 --rc genhtml_function_coverage=1 00:26:19.504 --rc genhtml_legend=1 00:26:19.504 --rc geninfo_all_blocks=1 00:26:19.504 --rc geninfo_unexecuted_blocks=1 00:26:19.504 00:26:19.504 ' 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:19.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.504 --rc genhtml_branch_coverage=1 00:26:19.504 --rc genhtml_function_coverage=1 00:26:19.504 --rc genhtml_legend=1 00:26:19.504 --rc geninfo_all_blocks=1 00:26:19.504 --rc geninfo_unexecuted_blocks=1 00:26:19.504 00:26:19.504 ' 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:19.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.504 --rc genhtml_branch_coverage=1 00:26:19.504 --rc genhtml_function_coverage=1 00:26:19.504 --rc genhtml_legend=1 00:26:19.504 --rc geninfo_all_blocks=1 00:26:19.504 --rc geninfo_unexecuted_blocks=1 00:26:19.504 00:26:19.504 ' 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:19.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.504 --rc genhtml_branch_coverage=1 00:26:19.504 --rc genhtml_function_coverage=1 00:26:19.504 --rc genhtml_legend=1 00:26:19.504 --rc geninfo_all_blocks=1 00:26:19.504 --rc geninfo_unexecuted_blocks=1 00:26:19.504 00:26:19.504 ' 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.504 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:19.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:19.505 09:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:19.505 
09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:19.505 09:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:27.645 09:11:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.645 09:11:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:27.645 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.645 09:11:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:27.645 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.645 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:27.646 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:27.646 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:27.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:26:27.646 00:26:27.646 --- 10.0.0.2 ping statistics --- 00:26:27.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.646 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:26:27.646 00:26:27.646 --- 10.0.0.1 ping statistics --- 00:26:27.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.646 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=831898 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 831898 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 831898 ']' 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.646 [2024-11-20 09:11:52.549343] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:26:27.646 [2024-11-20 09:11:52.549410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.646 [2024-11-20 09:11:52.623985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.646 [2024-11-20 09:11:52.669424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.646 [2024-11-20 09:11:52.669470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:27.646 [2024-11-20 09:11:52.669477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.646 [2024-11-20 09:11:52.669482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.646 [2024-11-20 09:11:52.669487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.646 [2024-11-20 09:11:52.670187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.646 [2024-11-20 09:11:52.832821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.646 [2024-11-20 09:11:52.841094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:27.646 null0 00:26:27.646 [2024-11-20 09:11:52.873044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:27.646 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.647 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=831928 00:26:27.647 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 831928 /tmp/host.sock 00:26:27.647 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:27.647 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 831928 ']' 00:26:27.647 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:27.647 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.647 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:27.647 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:27.647 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.647 09:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.647 [2024-11-20 09:11:52.960780] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:26:27.647 [2024-11-20 09:11:52.960839] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831928 ] 00:26:27.647 [2024-11-20 09:11:53.052709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.647 [2024-11-20 09:11:53.104988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.592 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.592 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:28.592 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:28.593 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:28.593 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.593 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.593 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.593 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:28.593 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.593 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.593 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.593 09:11:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:28.593 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.593 09:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.536 [2024-11-20 09:11:54.923387] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:29.536 [2024-11-20 09:11:54.923418] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:29.536 [2024-11-20 09:11:54.923434] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:29.536 [2024-11-20 09:11:55.051845] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:29.797 [2024-11-20 09:11:55.235324] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:29.797 [2024-11-20 09:11:55.236353] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb6c3f0:1 started. 
00:26:29.797 [2024-11-20 09:11:55.237969] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:29.797 [2024-11-20 09:11:55.238018] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:29.797 [2024-11-20 09:11:55.238045] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:29.797 [2024-11-20 09:11:55.238061] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:29.797 [2024-11-20 09:11:55.238082] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:29.797 [2024-11-20 09:11:55.241130] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb6c3f0 was disconnected and freed. delete nvme_qpair. 
00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:29.797 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:30.058 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:30.058 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.058 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.058 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.058 09:11:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.058 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.058 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.058 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.058 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.058 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:30.058 09:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:30.999 09:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.999 09:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.999 09:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.999 09:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.999 09:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.999 09:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.999 09:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.999 09:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.260 09:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:31.260 09:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:26:32.202 09:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.202 09:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.202 09:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.202 09:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.202 09:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.202 09:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.202 09:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.202 09:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.202 09:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:32.202 09:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.147 09:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.147 09:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.147 09:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.147 09:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.147 09:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.147 09:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.147 09:11:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.147 09:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.147 09:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.147 09:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.534 09:11:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.534 09:11:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.534 09:11:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.534 09:11:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.534 09:11:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.534 09:11:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.534 09:11:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.534 09:11:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.534 09:11:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.534 09:11:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.476 [2024-11-20 09:12:00.678530] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:35.476 [2024-11-20 09:12:00.678562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.476 [2024-11-20 09:12:00.678572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.476 [2024-11-20 09:12:00.678582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.476 [2024-11-20 09:12:00.678588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.476 [2024-11-20 09:12:00.678594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.476 [2024-11-20 09:12:00.678599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.476 [2024-11-20 09:12:00.678604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.476 [2024-11-20 09:12:00.678609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.476 [2024-11-20 09:12:00.678615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.476 [2024-11-20 09:12:00.678620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.476 [2024-11-20 09:12:00.678626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c00 is same with the state(6) to be set 00:26:35.476 [2024-11-20 09:12:00.688552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb48c00 (9): Bad file descriptor 00:26:35.476 [2024-11-20 09:12:00.698585] 
bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:35.476 [2024-11-20 09:12:00.698594] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:35.476 [2024-11-20 09:12:00.698598] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:35.476 [2024-11-20 09:12:00.698604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:35.476 [2024-11-20 09:12:00.698619] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:35.476 09:12:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.476 09:12:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.476 09:12:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.476 09:12:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.476 09:12:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.476 09:12:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.476 09:12:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.419 [2024-11-20 09:12:01.755254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:36.419 [2024-11-20 09:12:01.755351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48c00 with addr=10.0.0.2, port=4420 00:26:36.419 [2024-11-20 09:12:01.755383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c00 is same with the 
state(6) to be set 00:26:36.419 [2024-11-20 09:12:01.755442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb48c00 (9): Bad file descriptor 00:26:36.419 [2024-11-20 09:12:01.756567] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:36.419 [2024-11-20 09:12:01.756639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:36.419 [2024-11-20 09:12:01.756662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:36.419 [2024-11-20 09:12:01.756698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:36.419 [2024-11-20 09:12:01.756720] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:36.419 [2024-11-20 09:12:01.756737] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:36.419 [2024-11-20 09:12:01.756752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:36.419 [2024-11-20 09:12:01.756775] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:26:36.419 [2024-11-20 09:12:01.756790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:36.419 09:12:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.419 09:12:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:36.419 09:12:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.360 [2024-11-20 09:12:02.759215] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:37.360 [2024-11-20 09:12:02.759231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:37.360 [2024-11-20 09:12:02.759241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:37.360 [2024-11-20 09:12:02.759247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:37.360 [2024-11-20 09:12:02.759252] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:37.360 [2024-11-20 09:12:02.759258] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:37.360 [2024-11-20 09:12:02.759262] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:37.360 [2024-11-20 09:12:02.759266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:37.360 [2024-11-20 09:12:02.759285] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:37.360 [2024-11-20 09:12:02.759304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.360 [2024-11-20 09:12:02.759311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.360 [2024-11-20 09:12:02.759319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.360 [2024-11-20 09:12:02.759324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.360 [2024-11-20 09:12:02.759330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.360 [2024-11-20 09:12:02.759335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.360 [2024-11-20 09:12:02.759340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.360 [2024-11-20 09:12:02.759345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.360 [2024-11-20 09:12:02.759351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.360 [2024-11-20 09:12:02.759356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.360 [2024-11-20 09:12:02.759362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:37.360 [2024-11-20 09:12:02.759709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb38340 (9): Bad file descriptor 00:26:37.360 [2024-11-20 09:12:02.760719] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:37.360 [2024-11-20 09:12:02.760726] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:37.360 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.360 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.360 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.360 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.360 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.360 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.360 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.360 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.360 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:37.360 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.360 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.620 09:12:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:37.620 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.620 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.620 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.620 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.620 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.620 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.620 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.620 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.620 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:37.620 09:12:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.560 09:12:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.560 09:12:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.560 09:12:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.560 09:12:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.560 09:12:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.560 09:12:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.560 09:12:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.560 09:12:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.560 09:12:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:38.560 09:12:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.506 [2024-11-20 09:12:04.818348] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:39.506 [2024-11-20 09:12:04.818363] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:39.506 [2024-11-20 09:12:04.818373] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:39.506 [2024-11-20 09:12:04.906627] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:39.766 09:12:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.766 09:12:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.766 09:12:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.766 09:12:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.766 09:12:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.766 09:12:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.766 09:12:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.766 09:12:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.766 [2024-11-20 09:12:05.087536] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:39.766 [2024-11-20 09:12:05.088240] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xb3d130:1 started. 00:26:39.766 [2024-11-20 09:12:05.089130] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:39.766 [2024-11-20 09:12:05.089164] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:39.766 [2024-11-20 09:12:05.089180] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:39.766 [2024-11-20 09:12:05.089193] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:39.766 [2024-11-20 09:12:05.089199] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:39.766 09:12:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:39.766 09:12:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.766 [2024-11-20 09:12:05.095109] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xb3d130 was disconnected and freed. delete nvme_qpair. 
00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 831928 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 831928 ']' 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 831928 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 831928 00:26:40.708 
09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 831928' 00:26:40.708 killing process with pid 831928 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 831928 00:26:40.708 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 831928 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:40.968 rmmod nvme_tcp 00:26:40.968 rmmod nvme_fabrics 00:26:40.968 rmmod nvme_keyring 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 831898 ']' 00:26:40.968 09:12:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 831898 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 831898 ']' 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 831898 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 831898 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 831898' 00:26:40.968 killing process with pid 831898 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 831898 00:26:40.968 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 831898 00:26:41.229 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:41.229 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:41.229 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:41.229 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:41.229 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:41.229 09:12:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:41.229 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:41.229 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:41.229 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:41.229 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.229 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.229 09:12:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.138 09:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:43.138 00:26:43.138 real 0m23.875s 00:26:43.138 user 0m28.852s 00:26:43.138 sys 0m7.142s 00:26:43.138 09:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:43.138 09:12:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.138 ************************************ 00:26:43.138 END TEST nvmf_discovery_remove_ifc 00:26:43.138 ************************************ 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.399 ************************************ 00:26:43.399 
START TEST nvmf_identify_kernel_target 00:26:43.399 ************************************ 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:43.399 * Looking for test storage... 00:26:43.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:43.399 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:43.400 09:12:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:43.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.400 --rc genhtml_branch_coverage=1 00:26:43.400 --rc genhtml_function_coverage=1 00:26:43.400 --rc genhtml_legend=1 00:26:43.400 --rc geninfo_all_blocks=1 00:26:43.400 --rc geninfo_unexecuted_blocks=1 00:26:43.400 00:26:43.400 ' 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:43.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.400 --rc genhtml_branch_coverage=1 00:26:43.400 --rc genhtml_function_coverage=1 00:26:43.400 --rc genhtml_legend=1 00:26:43.400 --rc geninfo_all_blocks=1 00:26:43.400 --rc geninfo_unexecuted_blocks=1 00:26:43.400 00:26:43.400 ' 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:43.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.400 --rc genhtml_branch_coverage=1 00:26:43.400 --rc genhtml_function_coverage=1 00:26:43.400 --rc genhtml_legend=1 00:26:43.400 --rc geninfo_all_blocks=1 00:26:43.400 --rc geninfo_unexecuted_blocks=1 00:26:43.400 00:26:43.400 ' 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:43.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.400 --rc genhtml_branch_coverage=1 00:26:43.400 --rc genhtml_function_coverage=1 00:26:43.400 --rc genhtml_legend=1 00:26:43.400 --rc geninfo_all_blocks=1 
00:26:43.400 --rc geninfo_unexecuted_blocks=1 00:26:43.400 00:26:43.400 ' 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.400 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:43.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:43.662 09:12:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.924 09:12:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.924 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:51.925 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.925 09:12:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:51.925 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.925 09:12:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:51.925 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:51.925 Found net devices under 0000:4b:00.1: cvl_0_1 
00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:51.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:51.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:26:51.925 00:26:51.925 --- 10.0.0.2 ping statistics --- 00:26:51.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.925 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:51.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:26:51.925 00:26:51.925 --- 10.0.0.1 ping statistics --- 00:26:51.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.925 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:51.925 
09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:51.925 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:51.926 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:51.926 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:51.926 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:51.926 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:51.926 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:51.926 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:51.926 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:51.926 09:12:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:54.479 Waiting for block devices as requested 00:26:54.479 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:54.740 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:54.740 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:54.740 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:54.999 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:54.999 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:54.999 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:54.999 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:55.260 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:55.521 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:55.521 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:55.521 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:55.783 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:55.783 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:55.783 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:26:55.783 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:56.044 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:56.305 No valid GPT data, bailing 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:56.305 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:56.568 00:26:56.568 Discovery Log Number of Records 2, Generation counter 2 00:26:56.568 =====Discovery Log Entry 0====== 00:26:56.568 trtype: tcp 00:26:56.568 adrfam: ipv4 00:26:56.568 subtype: current discovery subsystem 
00:26:56.568 treq: not specified, sq flow control disable supported 00:26:56.568 portid: 1 00:26:56.568 trsvcid: 4420 00:26:56.568 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:56.568 traddr: 10.0.0.1 00:26:56.568 eflags: none 00:26:56.568 sectype: none 00:26:56.568 =====Discovery Log Entry 1====== 00:26:56.568 trtype: tcp 00:26:56.568 adrfam: ipv4 00:26:56.568 subtype: nvme subsystem 00:26:56.568 treq: not specified, sq flow control disable supported 00:26:56.568 portid: 1 00:26:56.568 trsvcid: 4420 00:26:56.568 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:56.568 traddr: 10.0.0.1 00:26:56.568 eflags: none 00:26:56.568 sectype: none 00:26:56.568 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:56.568 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:56.568 ===================================================== 00:26:56.568 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:56.568 ===================================================== 00:26:56.568 Controller Capabilities/Features 00:26:56.568 ================================ 00:26:56.568 Vendor ID: 0000 00:26:56.568 Subsystem Vendor ID: 0000 00:26:56.568 Serial Number: ba7df110c94821a0afea 00:26:56.568 Model Number: Linux 00:26:56.568 Firmware Version: 6.8.9-20 00:26:56.568 Recommended Arb Burst: 0 00:26:56.568 IEEE OUI Identifier: 00 00 00 00:26:56.568 Multi-path I/O 00:26:56.568 May have multiple subsystem ports: No 00:26:56.568 May have multiple controllers: No 00:26:56.568 Associated with SR-IOV VF: No 00:26:56.568 Max Data Transfer Size: Unlimited 00:26:56.568 Max Number of Namespaces: 0 00:26:56.568 Max Number of I/O Queues: 1024 00:26:56.568 NVMe Specification Version (VS): 1.3 00:26:56.568 NVMe Specification Version (Identify): 1.3 00:26:56.568 Maximum Queue Entries: 1024 
00:26:56.568 Contiguous Queues Required: No 00:26:56.568 Arbitration Mechanisms Supported 00:26:56.568 Weighted Round Robin: Not Supported 00:26:56.568 Vendor Specific: Not Supported 00:26:56.568 Reset Timeout: 7500 ms 00:26:56.568 Doorbell Stride: 4 bytes 00:26:56.568 NVM Subsystem Reset: Not Supported 00:26:56.568 Command Sets Supported 00:26:56.568 NVM Command Set: Supported 00:26:56.568 Boot Partition: Not Supported 00:26:56.568 Memory Page Size Minimum: 4096 bytes 00:26:56.568 Memory Page Size Maximum: 4096 bytes 00:26:56.568 Persistent Memory Region: Not Supported 00:26:56.568 Optional Asynchronous Events Supported 00:26:56.568 Namespace Attribute Notices: Not Supported 00:26:56.568 Firmware Activation Notices: Not Supported 00:26:56.568 ANA Change Notices: Not Supported 00:26:56.568 PLE Aggregate Log Change Notices: Not Supported 00:26:56.568 LBA Status Info Alert Notices: Not Supported 00:26:56.568 EGE Aggregate Log Change Notices: Not Supported 00:26:56.568 Normal NVM Subsystem Shutdown event: Not Supported 00:26:56.568 Zone Descriptor Change Notices: Not Supported 00:26:56.568 Discovery Log Change Notices: Supported 00:26:56.568 Controller Attributes 00:26:56.568 128-bit Host Identifier: Not Supported 00:26:56.568 Non-Operational Permissive Mode: Not Supported 00:26:56.568 NVM Sets: Not Supported 00:26:56.568 Read Recovery Levels: Not Supported 00:26:56.568 Endurance Groups: Not Supported 00:26:56.568 Predictable Latency Mode: Not Supported 00:26:56.568 Traffic Based Keep ALive: Not Supported 00:26:56.568 Namespace Granularity: Not Supported 00:26:56.568 SQ Associations: Not Supported 00:26:56.568 UUID List: Not Supported 00:26:56.568 Multi-Domain Subsystem: Not Supported 00:26:56.568 Fixed Capacity Management: Not Supported 00:26:56.568 Variable Capacity Management: Not Supported 00:26:56.568 Delete Endurance Group: Not Supported 00:26:56.568 Delete NVM Set: Not Supported 00:26:56.568 Extended LBA Formats Supported: Not Supported 00:26:56.568 Flexible 
Data Placement Supported: Not Supported 00:26:56.568 00:26:56.568 Controller Memory Buffer Support 00:26:56.568 ================================ 00:26:56.568 Supported: No 00:26:56.568 00:26:56.568 Persistent Memory Region Support 00:26:56.568 ================================ 00:26:56.568 Supported: No 00:26:56.568 00:26:56.568 Admin Command Set Attributes 00:26:56.568 ============================ 00:26:56.568 Security Send/Receive: Not Supported 00:26:56.568 Format NVM: Not Supported 00:26:56.568 Firmware Activate/Download: Not Supported 00:26:56.568 Namespace Management: Not Supported 00:26:56.568 Device Self-Test: Not Supported 00:26:56.568 Directives: Not Supported 00:26:56.568 NVMe-MI: Not Supported 00:26:56.568 Virtualization Management: Not Supported 00:26:56.568 Doorbell Buffer Config: Not Supported 00:26:56.568 Get LBA Status Capability: Not Supported 00:26:56.568 Command & Feature Lockdown Capability: Not Supported 00:26:56.568 Abort Command Limit: 1 00:26:56.568 Async Event Request Limit: 1 00:26:56.568 Number of Firmware Slots: N/A 00:26:56.568 Firmware Slot 1 Read-Only: N/A 00:26:56.568 Firmware Activation Without Reset: N/A 00:26:56.568 Multiple Update Detection Support: N/A 00:26:56.568 Firmware Update Granularity: No Information Provided 00:26:56.568 Per-Namespace SMART Log: No 00:26:56.568 Asymmetric Namespace Access Log Page: Not Supported 00:26:56.568 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:56.568 Command Effects Log Page: Not Supported 00:26:56.568 Get Log Page Extended Data: Supported 00:26:56.568 Telemetry Log Pages: Not Supported 00:26:56.568 Persistent Event Log Pages: Not Supported 00:26:56.568 Supported Log Pages Log Page: May Support 00:26:56.568 Commands Supported & Effects Log Page: Not Supported 00:26:56.568 Feature Identifiers & Effects Log Page:May Support 00:26:56.568 NVMe-MI Commands & Effects Log Page: May Support 00:26:56.568 Data Area 4 for Telemetry Log: Not Supported 00:26:56.568 Error Log Page Entries 
Supported: 1 00:26:56.568 Keep Alive: Not Supported 00:26:56.568 00:26:56.568 NVM Command Set Attributes 00:26:56.568 ========================== 00:26:56.568 Submission Queue Entry Size 00:26:56.568 Max: 1 00:26:56.568 Min: 1 00:26:56.568 Completion Queue Entry Size 00:26:56.568 Max: 1 00:26:56.568 Min: 1 00:26:56.568 Number of Namespaces: 0 00:26:56.568 Compare Command: Not Supported 00:26:56.568 Write Uncorrectable Command: Not Supported 00:26:56.568 Dataset Management Command: Not Supported 00:26:56.568 Write Zeroes Command: Not Supported 00:26:56.568 Set Features Save Field: Not Supported 00:26:56.569 Reservations: Not Supported 00:26:56.569 Timestamp: Not Supported 00:26:56.569 Copy: Not Supported 00:26:56.569 Volatile Write Cache: Not Present 00:26:56.569 Atomic Write Unit (Normal): 1 00:26:56.569 Atomic Write Unit (PFail): 1 00:26:56.569 Atomic Compare & Write Unit: 1 00:26:56.569 Fused Compare & Write: Not Supported 00:26:56.569 Scatter-Gather List 00:26:56.569 SGL Command Set: Supported 00:26:56.569 SGL Keyed: Not Supported 00:26:56.569 SGL Bit Bucket Descriptor: Not Supported 00:26:56.569 SGL Metadata Pointer: Not Supported 00:26:56.569 Oversized SGL: Not Supported 00:26:56.569 SGL Metadata Address: Not Supported 00:26:56.569 SGL Offset: Supported 00:26:56.569 Transport SGL Data Block: Not Supported 00:26:56.569 Replay Protected Memory Block: Not Supported 00:26:56.569 00:26:56.569 Firmware Slot Information 00:26:56.569 ========================= 00:26:56.569 Active slot: 0 00:26:56.569 00:26:56.569 00:26:56.569 Error Log 00:26:56.569 ========= 00:26:56.569 00:26:56.569 Active Namespaces 00:26:56.569 ================= 00:26:56.569 Discovery Log Page 00:26:56.569 ================== 00:26:56.569 Generation Counter: 2 00:26:56.569 Number of Records: 2 00:26:56.569 Record Format: 0 00:26:56.569 00:26:56.569 Discovery Log Entry 0 00:26:56.569 ---------------------- 00:26:56.569 Transport Type: 3 (TCP) 00:26:56.569 Address Family: 1 (IPv4) 00:26:56.569 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:56.569 Entry Flags: 00:26:56.569 Duplicate Returned Information: 0 00:26:56.569 Explicit Persistent Connection Support for Discovery: 0 00:26:56.569 Transport Requirements: 00:26:56.569 Secure Channel: Not Specified 00:26:56.569 Port ID: 1 (0x0001) 00:26:56.569 Controller ID: 65535 (0xffff) 00:26:56.569 Admin Max SQ Size: 32 00:26:56.569 Transport Service Identifier: 4420 00:26:56.569 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:56.569 Transport Address: 10.0.0.1 00:26:56.569 Discovery Log Entry 1 00:26:56.569 ---------------------- 00:26:56.569 Transport Type: 3 (TCP) 00:26:56.569 Address Family: 1 (IPv4) 00:26:56.569 Subsystem Type: 2 (NVM Subsystem) 00:26:56.569 Entry Flags: 00:26:56.569 Duplicate Returned Information: 0 00:26:56.569 Explicit Persistent Connection Support for Discovery: 0 00:26:56.569 Transport Requirements: 00:26:56.569 Secure Channel: Not Specified 00:26:56.569 Port ID: 1 (0x0001) 00:26:56.569 Controller ID: 65535 (0xffff) 00:26:56.569 Admin Max SQ Size: 32 00:26:56.569 Transport Service Identifier: 4420 00:26:56.569 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:56.569 Transport Address: 10.0.0.1 00:26:56.569 09:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:56.569 get_feature(0x01) failed 00:26:56.569 get_feature(0x02) failed 00:26:56.569 get_feature(0x04) failed 00:26:56.569 ===================================================== 00:26:56.569 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:56.569 ===================================================== 00:26:56.569 Controller Capabilities/Features 00:26:56.569 ================================ 00:26:56.569 Vendor ID: 0000 00:26:56.569 Subsystem Vendor ID: 
0000 00:26:56.569 Serial Number: b80f6710a524e38f634f 00:26:56.569 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:56.569 Firmware Version: 6.8.9-20 00:26:56.569 Recommended Arb Burst: 6 00:26:56.569 IEEE OUI Identifier: 00 00 00 00:26:56.569 Multi-path I/O 00:26:56.569 May have multiple subsystem ports: Yes 00:26:56.569 May have multiple controllers: Yes 00:26:56.569 Associated with SR-IOV VF: No 00:26:56.569 Max Data Transfer Size: Unlimited 00:26:56.569 Max Number of Namespaces: 1024 00:26:56.569 Max Number of I/O Queues: 128 00:26:56.569 NVMe Specification Version (VS): 1.3 00:26:56.569 NVMe Specification Version (Identify): 1.3 00:26:56.569 Maximum Queue Entries: 1024 00:26:56.569 Contiguous Queues Required: No 00:26:56.569 Arbitration Mechanisms Supported 00:26:56.569 Weighted Round Robin: Not Supported 00:26:56.569 Vendor Specific: Not Supported 00:26:56.569 Reset Timeout: 7500 ms 00:26:56.569 Doorbell Stride: 4 bytes 00:26:56.569 NVM Subsystem Reset: Not Supported 00:26:56.569 Command Sets Supported 00:26:56.569 NVM Command Set: Supported 00:26:56.569 Boot Partition: Not Supported 00:26:56.569 Memory Page Size Minimum: 4096 bytes 00:26:56.569 Memory Page Size Maximum: 4096 bytes 00:26:56.569 Persistent Memory Region: Not Supported 00:26:56.569 Optional Asynchronous Events Supported 00:26:56.569 Namespace Attribute Notices: Supported 00:26:56.569 Firmware Activation Notices: Not Supported 00:26:56.569 ANA Change Notices: Supported 00:26:56.569 PLE Aggregate Log Change Notices: Not Supported 00:26:56.569 LBA Status Info Alert Notices: Not Supported 00:26:56.569 EGE Aggregate Log Change Notices: Not Supported 00:26:56.569 Normal NVM Subsystem Shutdown event: Not Supported 00:26:56.569 Zone Descriptor Change Notices: Not Supported 00:26:56.569 Discovery Log Change Notices: Not Supported 00:26:56.569 Controller Attributes 00:26:56.569 128-bit Host Identifier: Supported 00:26:56.569 Non-Operational Permissive Mode: Not Supported 00:26:56.569 NVM Sets: Not 
Supported 00:26:56.569 Read Recovery Levels: Not Supported 00:26:56.569 Endurance Groups: Not Supported 00:26:56.569 Predictable Latency Mode: Not Supported 00:26:56.569 Traffic Based Keep ALive: Supported 00:26:56.569 Namespace Granularity: Not Supported 00:26:56.569 SQ Associations: Not Supported 00:26:56.569 UUID List: Not Supported 00:26:56.569 Multi-Domain Subsystem: Not Supported 00:26:56.569 Fixed Capacity Management: Not Supported 00:26:56.569 Variable Capacity Management: Not Supported 00:26:56.569 Delete Endurance Group: Not Supported 00:26:56.569 Delete NVM Set: Not Supported 00:26:56.569 Extended LBA Formats Supported: Not Supported 00:26:56.569 Flexible Data Placement Supported: Not Supported 00:26:56.569 00:26:56.569 Controller Memory Buffer Support 00:26:56.569 ================================ 00:26:56.569 Supported: No 00:26:56.569 00:26:56.569 Persistent Memory Region Support 00:26:56.569 ================================ 00:26:56.569 Supported: No 00:26:56.569 00:26:56.569 Admin Command Set Attributes 00:26:56.569 ============================ 00:26:56.569 Security Send/Receive: Not Supported 00:26:56.569 Format NVM: Not Supported 00:26:56.569 Firmware Activate/Download: Not Supported 00:26:56.569 Namespace Management: Not Supported 00:26:56.569 Device Self-Test: Not Supported 00:26:56.569 Directives: Not Supported 00:26:56.569 NVMe-MI: Not Supported 00:26:56.569 Virtualization Management: Not Supported 00:26:56.569 Doorbell Buffer Config: Not Supported 00:26:56.569 Get LBA Status Capability: Not Supported 00:26:56.569 Command & Feature Lockdown Capability: Not Supported 00:26:56.569 Abort Command Limit: 4 00:26:56.569 Async Event Request Limit: 4 00:26:56.569 Number of Firmware Slots: N/A 00:26:56.569 Firmware Slot 1 Read-Only: N/A 00:26:56.569 Firmware Activation Without Reset: N/A 00:26:56.569 Multiple Update Detection Support: N/A 00:26:56.569 Firmware Update Granularity: No Information Provided 00:26:56.569 Per-Namespace SMART Log: Yes 
00:26:56.569 Asymmetric Namespace Access Log Page: Supported 00:26:56.569 ANA Transition Time : 10 sec 00:26:56.569 00:26:56.569 Asymmetric Namespace Access Capabilities 00:26:56.569 ANA Optimized State : Supported 00:26:56.569 ANA Non-Optimized State : Supported 00:26:56.569 ANA Inaccessible State : Supported 00:26:56.569 ANA Persistent Loss State : Supported 00:26:56.569 ANA Change State : Supported 00:26:56.569 ANAGRPID is not changed : No 00:26:56.569 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:56.569 00:26:56.569 ANA Group Identifier Maximum : 128 00:26:56.569 Number of ANA Group Identifiers : 128 00:26:56.569 Max Number of Allowed Namespaces : 1024 00:26:56.569 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:56.569 Command Effects Log Page: Supported 00:26:56.569 Get Log Page Extended Data: Supported 00:26:56.569 Telemetry Log Pages: Not Supported 00:26:56.569 Persistent Event Log Pages: Not Supported 00:26:56.569 Supported Log Pages Log Page: May Support 00:26:56.569 Commands Supported & Effects Log Page: Not Supported 00:26:56.569 Feature Identifiers & Effects Log Page:May Support 00:26:56.569 NVMe-MI Commands & Effects Log Page: May Support 00:26:56.569 Data Area 4 for Telemetry Log: Not Supported 00:26:56.569 Error Log Page Entries Supported: 128 00:26:56.569 Keep Alive: Supported 00:26:56.569 Keep Alive Granularity: 1000 ms 00:26:56.569 00:26:56.569 NVM Command Set Attributes 00:26:56.570 ========================== 00:26:56.570 Submission Queue Entry Size 00:26:56.570 Max: 64 00:26:56.570 Min: 64 00:26:56.570 Completion Queue Entry Size 00:26:56.570 Max: 16 00:26:56.570 Min: 16 00:26:56.570 Number of Namespaces: 1024 00:26:56.570 Compare Command: Not Supported 00:26:56.570 Write Uncorrectable Command: Not Supported 00:26:56.570 Dataset Management Command: Supported 00:26:56.570 Write Zeroes Command: Supported 00:26:56.570 Set Features Save Field: Not Supported 00:26:56.570 Reservations: Not Supported 00:26:56.570 Timestamp: Not Supported 
00:26:56.570 Copy: Not Supported 00:26:56.570 Volatile Write Cache: Present 00:26:56.570 Atomic Write Unit (Normal): 1 00:26:56.570 Atomic Write Unit (PFail): 1 00:26:56.570 Atomic Compare & Write Unit: 1 00:26:56.570 Fused Compare & Write: Not Supported 00:26:56.570 Scatter-Gather List 00:26:56.570 SGL Command Set: Supported 00:26:56.570 SGL Keyed: Not Supported 00:26:56.570 SGL Bit Bucket Descriptor: Not Supported 00:26:56.570 SGL Metadata Pointer: Not Supported 00:26:56.570 Oversized SGL: Not Supported 00:26:56.570 SGL Metadata Address: Not Supported 00:26:56.570 SGL Offset: Supported 00:26:56.570 Transport SGL Data Block: Not Supported 00:26:56.570 Replay Protected Memory Block: Not Supported 00:26:56.570 00:26:56.570 Firmware Slot Information 00:26:56.570 ========================= 00:26:56.570 Active slot: 0 00:26:56.570 00:26:56.570 Asymmetric Namespace Access 00:26:56.570 =========================== 00:26:56.570 Change Count : 0 00:26:56.570 Number of ANA Group Descriptors : 1 00:26:56.570 ANA Group Descriptor : 0 00:26:56.570 ANA Group ID : 1 00:26:56.570 Number of NSID Values : 1 00:26:56.570 Change Count : 0 00:26:56.570 ANA State : 1 00:26:56.570 Namespace Identifier : 1 00:26:56.570 00:26:56.570 Commands Supported and Effects 00:26:56.570 ============================== 00:26:56.570 Admin Commands 00:26:56.570 -------------- 00:26:56.570 Get Log Page (02h): Supported 00:26:56.570 Identify (06h): Supported 00:26:56.570 Abort (08h): Supported 00:26:56.570 Set Features (09h): Supported 00:26:56.570 Get Features (0Ah): Supported 00:26:56.570 Asynchronous Event Request (0Ch): Supported 00:26:56.570 Keep Alive (18h): Supported 00:26:56.570 I/O Commands 00:26:56.570 ------------ 00:26:56.570 Flush (00h): Supported 00:26:56.570 Write (01h): Supported LBA-Change 00:26:56.570 Read (02h): Supported 00:26:56.570 Write Zeroes (08h): Supported LBA-Change 00:26:56.570 Dataset Management (09h): Supported 00:26:56.570 00:26:56.570 Error Log 00:26:56.570 ========= 
00:26:56.570 Entry: 0 00:26:56.570 Error Count: 0x3 00:26:56.570 Submission Queue Id: 0x0 00:26:56.570 Command Id: 0x5 00:26:56.570 Phase Bit: 0 00:26:56.570 Status Code: 0x2 00:26:56.570 Status Code Type: 0x0 00:26:56.570 Do Not Retry: 1 00:26:56.831 Error Location: 0x28 00:26:56.831 LBA: 0x0 00:26:56.831 Namespace: 0x0 00:26:56.831 Vendor Log Page: 0x0 00:26:56.831 ----------- 00:26:56.831 Entry: 1 00:26:56.831 Error Count: 0x2 00:26:56.831 Submission Queue Id: 0x0 00:26:56.831 Command Id: 0x5 00:26:56.831 Phase Bit: 0 00:26:56.831 Status Code: 0x2 00:26:56.831 Status Code Type: 0x0 00:26:56.831 Do Not Retry: 1 00:26:56.831 Error Location: 0x28 00:26:56.831 LBA: 0x0 00:26:56.831 Namespace: 0x0 00:26:56.831 Vendor Log Page: 0x0 00:26:56.831 ----------- 00:26:56.831 Entry: 2 00:26:56.831 Error Count: 0x1 00:26:56.831 Submission Queue Id: 0x0 00:26:56.831 Command Id: 0x4 00:26:56.831 Phase Bit: 0 00:26:56.831 Status Code: 0x2 00:26:56.831 Status Code Type: 0x0 00:26:56.831 Do Not Retry: 1 00:26:56.831 Error Location: 0x28 00:26:56.831 LBA: 0x0 00:26:56.831 Namespace: 0x0 00:26:56.831 Vendor Log Page: 0x0 00:26:56.831 00:26:56.831 Number of Queues 00:26:56.831 ================ 00:26:56.831 Number of I/O Submission Queues: 128 00:26:56.831 Number of I/O Completion Queues: 128 00:26:56.831 00:26:56.831 ZNS Specific Controller Data 00:26:56.831 ============================ 00:26:56.831 Zone Append Size Limit: 0 00:26:56.831 00:26:56.831 00:26:56.831 Active Namespaces 00:26:56.831 ================= 00:26:56.831 get_feature(0x05) failed 00:26:56.831 Namespace ID:1 00:26:56.831 Command Set Identifier: NVM (00h) 00:26:56.831 Deallocate: Supported 00:26:56.831 Deallocated/Unwritten Error: Not Supported 00:26:56.831 Deallocated Read Value: Unknown 00:26:56.831 Deallocate in Write Zeroes: Not Supported 00:26:56.831 Deallocated Guard Field: 0xFFFF 00:26:56.831 Flush: Supported 00:26:56.831 Reservation: Not Supported 00:26:56.831 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:56.831 Size (in LBAs): 3750748848 (1788GiB) 00:26:56.831 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:56.831 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:56.831 UUID: a6426f26-29af-4e18-aef7-1aed93411853 00:26:56.831 Thin Provisioning: Not Supported 00:26:56.831 Per-NS Atomic Units: Yes 00:26:56.831 Atomic Write Unit (Normal): 8 00:26:56.831 Atomic Write Unit (PFail): 8 00:26:56.831 Preferred Write Granularity: 8 00:26:56.831 Atomic Compare & Write Unit: 8 00:26:56.831 Atomic Boundary Size (Normal): 0 00:26:56.831 Atomic Boundary Size (PFail): 0 00:26:56.831 Atomic Boundary Offset: 0 00:26:56.831 NGUID/EUI64 Never Reused: No 00:26:56.831 ANA group ID: 1 00:26:56.831 Namespace Write Protected: No 00:26:56.831 Number of LBA Formats: 1 00:26:56.831 Current LBA Format: LBA Format #00 00:26:56.831 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:56.831 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:56.831 rmmod nvme_tcp 00:26:56.831 rmmod nvme_fabrics 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:56.831 09:12:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.831 09:12:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.744 09:12:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:58.744 09:12:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:58.744 09:12:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:58.744 09:12:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:58.744 09:12:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:58.744 09:12:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:59.006 09:12:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:59.006 09:12:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:59.006 09:12:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:59.006 09:12:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:59.006 09:12:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:02.307 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:02.307 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:02.307 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:02.567 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:02.567 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:02.567 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:02.567 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:02.568 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:02.568 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:02.568 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:02.568 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:02.568 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:02.568 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:27:02.568 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:02.568 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:02.568 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:02.568 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:03.138 00:27:03.138 real 0m19.683s 00:27:03.138 user 0m5.303s 00:27:03.138 sys 0m11.368s 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:03.138 ************************************ 00:27:03.138 END TEST nvmf_identify_kernel_target 00:27:03.138 ************************************ 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.138 ************************************ 00:27:03.138 START TEST nvmf_auth_host 00:27:03.138 ************************************ 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:03.138 * Looking for test storage... 
00:27:03.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:03.138 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:03.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.400 --rc genhtml_branch_coverage=1 00:27:03.400 --rc genhtml_function_coverage=1 00:27:03.400 --rc genhtml_legend=1 00:27:03.400 --rc geninfo_all_blocks=1 00:27:03.400 --rc geninfo_unexecuted_blocks=1 00:27:03.400 00:27:03.400 ' 00:27:03.400 09:12:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:03.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.400 --rc genhtml_branch_coverage=1 00:27:03.400 --rc genhtml_function_coverage=1 00:27:03.400 --rc genhtml_legend=1 00:27:03.400 --rc geninfo_all_blocks=1 00:27:03.400 --rc geninfo_unexecuted_blocks=1 00:27:03.400 00:27:03.400 ' 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:03.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.400 --rc genhtml_branch_coverage=1 00:27:03.400 --rc genhtml_function_coverage=1 00:27:03.400 --rc genhtml_legend=1 00:27:03.400 --rc geninfo_all_blocks=1 00:27:03.400 --rc geninfo_unexecuted_blocks=1 00:27:03.400 00:27:03.400 ' 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:03.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.400 --rc genhtml_branch_coverage=1 00:27:03.400 --rc genhtml_function_coverage=1 00:27:03.400 --rc genhtml_legend=1 00:27:03.400 --rc geninfo_all_blocks=1 00:27:03.400 --rc geninfo_unexecuted_blocks=1 00:27:03.400 00:27:03.400 ' 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.400 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.401 09:12:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:03.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:03.401 09:12:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:03.401 09:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.543 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:11.544 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:11.544 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:11.544 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:11.544 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:11.544 09:12:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:11.544 09:12:35 
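The `nvmf/common.sh@410-429` steps above resolve each NIC PCI function to its kernel netdev name by globbing sysfs and stripping the path prefix. A standalone sketch of that lookup (the directory argument stands in for `/sys/bus/pci/devices/$pci`; the function name is illustrative, not from the script):

```shell
#!/usr/bin/env bash
# Sketch of the per-PCI netdev discovery seen in the trace above.
# /sys/bus/pci/devices/<pci>/net/ contains one entry per network interface
# exposed by that PCI function (e.g. cvl_0_0 for 0000:4b:00.0).
list_pci_net_devs() {
    local pci_dir=$1
    local pci_net_devs=("$pci_dir"/net/*)        # full sysfs paths
    pci_net_devs=("${pci_net_devs[@]##*/}")      # keep only the interface names
    printf '%s\n' "${pci_net_devs[@]}"
}
```

On the test node this would be called as `list_pci_net_devs /sys/bus/pci/devices/0000:4b:00.0` and print `cvl_0_0`, matching the "Found net devices under 0000:4b:00.0" line in the log.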
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.544 09:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:11.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:27:11.544 00:27:11.544 --- 10.0.0.2 ping statistics --- 00:27:11.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.544 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:11.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:27:11.544 00:27:11.544 --- 10.0.0.1 ping statistics --- 00:27:11.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.544 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:11.544 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=846998 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 846998 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
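The `ipts` call at `nvmf/common.sh@287` above expands (line 790) into a plain `iptables` invocation with an `SPDK_NVMF:`-prefixed comment appended. That tag is what lets teardown later find and delete exactly the rules this test inserted. A sketch of the wrapper as the trace shows it expanding:

```shell
#!/usr/bin/env bash
# Sketch of the ipts wrapper visible in the trace: forward all arguments to
# iptables, tagging the rule with a comment that records the original rule
# spec under an SPDK_NVMF: prefix so cleanup can match it later.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Real use (needs root), as in the log:
#   ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Deleting the rules later is then a matter of listing rules whose comment starts with `SPDK_NVMF:` and replaying each recorded spec with `-D` instead of `-I`.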
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 846998 ']' 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:11.545 09:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.545 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.545 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:11.545 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:11.545 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:11.545 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.806 09:12:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3b1e63d09401bf8360a3fd13787d90d6 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.r1e 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3b1e63d09401bf8360a3fd13787d90d6 0 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3b1e63d09401bf8360a3fd13787d90d6 0 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3b1e63d09401bf8360a3fd13787d90d6 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.r1e 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.r1e 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.r1e 
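The `gen_dhchap_key null 32` sequence above draws random bytes with `xxd`, then pipes them through an embedded `python -` step (`format_key DHHC-1 <hex> <digest>`) whose body the trace does not show. A sketch of the whole pipeline, under the assumption that the DHHC-1 secret encodes `base64(key || crc32(key))` with the CRC in little-endian byte order, as in `nvme gen-dhchap-key`; the function name and python body here are reconstructions, not the script's own:

```shell
#!/usr/bin/env bash
# Sketch (assumed DHHC-1 layout) of the gen_dhchap_key/format_key pipeline:
# <n> random bytes -> hex -> "DHHC-1:<digest hex>:<base64(key+crc32)>:"
gen_dhchap_secret() {
    local nbytes=$1 digest=$2 key
    # xxd -p -c0 prints the bytes as one continuous hex string, as in the log
    key=$(xxd -p -c0 -l "$nbytes" /dev/urandom)
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: little-endian CRC
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
}

# gen_dhchap_secret 16 0  ->  DHHC-1:00:<28-char base64>:
```

The digest field matches the `digests` map in the trace (`null`=0, `sha256`=1, `sha384`=2, `sha512`=3), and the resulting secret is what gets written to `/tmp/spdk.key-*` and `chmod 0600`'d.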
00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0431e1620500a5afd928bb7ac8675201a28654cb92eb01fe7045927703be8d1f 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Dtz 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0431e1620500a5afd928bb7ac8675201a28654cb92eb01fe7045927703be8d1f 3 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0431e1620500a5afd928bb7ac8675201a28654cb92eb01fe7045927703be8d1f 3 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0431e1620500a5afd928bb7ac8675201a28654cb92eb01fe7045927703be8d1f 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Dtz 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Dtz 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Dtz 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=987e7b964643a896db09ae574cf634b54aedc7ffb14e891e 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XmY 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 987e7b964643a896db09ae574cf634b54aedc7ffb14e891e 0 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 987e7b964643a896db09ae574cf634b54aedc7ffb14e891e 0 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.806 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=987e7b964643a896db09ae574cf634b54aedc7ffb14e891e 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XmY 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XmY 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.XmY 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e4e2bb84a6c74ae26f65ed1e1d770be2ffe2ff2abc0f5733 00:27:11.807 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1V9 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e4e2bb84a6c74ae26f65ed1e1d770be2ffe2ff2abc0f5733 2 00:27:12.068 09:12:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e4e2bb84a6c74ae26f65ed1e1d770be2ffe2ff2abc0f5733 2 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e4e2bb84a6c74ae26f65ed1e1d770be2ffe2ff2abc0f5733 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1V9 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1V9 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1V9 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4aa7fd70c93659a59d08092e0a5d1bb0 00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:27:12.068 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Nb2 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4aa7fd70c93659a59d08092e0a5d1bb0 1 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4aa7fd70c93659a59d08092e0a5d1bb0 1 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4aa7fd70c93659a59d08092e0a5d1bb0 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Nb2 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Nb2 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Nb2 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f2bdf58c5b54ef63611150c974922266 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.uxd 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f2bdf58c5b54ef63611150c974922266 1 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f2bdf58c5b54ef63611150c974922266 1 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f2bdf58c5b54ef63611150c974922266 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.uxd 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.uxd 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uxd 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:12.069 09:12:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6ad9b3ff2555c50464a1565f7aca7fbb2222222b86397d69 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.rw6 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6ad9b3ff2555c50464a1565f7aca7fbb2222222b86397d69 2 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6ad9b3ff2555c50464a1565f7aca7fbb2222222b86397d69 2 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6ad9b3ff2555c50464a1565f7aca7fbb2222222b86397d69 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.rw6 00:27:12.069 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.rw6 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.rw6 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c8b2278a3c8d3d617b5815c9febc8022 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mzL 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c8b2278a3c8d3d617b5815c9febc8022 0 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c8b2278a3c8d3d617b5815c9febc8022 0 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c8b2278a3c8d3d617b5815c9febc8022 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mzL 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mzL 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.mzL 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0dd1a5de214c0f21a2ba2da442ec4116f607c0a917791caa1cbf0836d410a572 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4fm 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0dd1a5de214c0f21a2ba2da442ec4116f607c0a917791caa1cbf0836d410a572 3 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0dd1a5de214c0f21a2ba2da442ec4116f607c0a917791caa1cbf0836d410a572 3 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0dd1a5de214c0f21a2ba2da442ec4116f607c0a917791caa1cbf0836d410a572 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:12.330 09:12:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4fm 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4fm 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.4fm 00:27:12.330 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:12.331 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 846998 00:27:12.331 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 846998 ']' 00:27:12.331 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.331 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:12.331 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:12.331 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:12.331 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.r1e 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Dtz ]] 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Dtz 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XmY 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.1V9 ]] 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1V9 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Nb2 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uxd ]] 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uxd 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.596 09:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.rw6 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.mzL ]] 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.mzL 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.4fm 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.596 09:12:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:12.596 09:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:15.894 Waiting for block devices as requested 00:27:16.153 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:16.153 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:16.153 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:16.414 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:16.414 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:16.414 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:16.674 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:16.674 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:16.674 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:16.934 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:16.934 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:16.934 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:17.194 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:17.194 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:17.194 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:17.194 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:17.453 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:18.394 No valid GPT data, bailing 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1
00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:27:18.394
00:27:18.394 Discovery Log Number of Records 2, Generation counter 2
00:27:18.394 =====Discovery Log Entry 0======
00:27:18.394 trtype: tcp
00:27:18.394 adrfam: ipv4
00:27:18.394 subtype: current discovery subsystem
00:27:18.394 treq: not specified, sq flow control disable supported
00:27:18.394 portid: 1
00:27:18.394 trsvcid: 4420
00:27:18.394 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:18.394 traddr: 10.0.0.1
00:27:18.394 eflags: none
00:27:18.394 sectype: none
00:27:18.394 =====Discovery Log Entry 1======
00:27:18.394 trtype: tcp
00:27:18.394 adrfam: ipv4
00:27:18.394 subtype: nvme subsystem
00:27:18.394 treq: not specified, sq flow control disable supported
00:27:18.394 portid: 1
00:27:18.394 trsvcid: 4420
00:27:18.394 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:18.394 traddr: 10.0.0.1
00:27:18.394 eflags: none
00:27:18.394 sectype: none
00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.394 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.395 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.395 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.395 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.656 nvme0n1 00:27:18.656 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.656 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.656 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.656 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.656 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.656 09:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
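The get_main_ns_ip trace that repeats throughout this section resolves the target address (10.0.0.1 here) by mapping the transport type to the environment variable that holds the right IP. A rough Python rendering of that shell lookup (the table mirrors the ip_candidates associative array in the trace; the helper name is taken from the shell function):

```python
def get_main_ns_ip(transport: str, env: dict) -> str:
    """Pick the env var holding the kernel target's IP for a transport,
    mirroring nvmf/common.sh's get_main_ns_ip."""
    # Same table as the shell's ip_candidates associative array:
    # rdma targets use NVMF_FIRST_TARGET_IP, tcp uses NVMF_INITIATOR_IP.
    ip_candidates = {
        "rdma": "NVMF_FIRST_TARGET_IP",
        "tcp": "NVMF_INITIATOR_IP",
    }
    if transport not in ip_candidates:
        raise ValueError(f"unknown transport: {transport}")
    ip = env.get(ip_candidates[transport], "")
    if not ip:
        raise RuntimeError(f"{ip_candidates[transport]} is not set")
    return ip


# With the values visible in this log:
print(get_main_ns_ip("tcp", {"NVMF_INITIATOR_IP": "10.0.0.1"}))  # 10.0.0.1
```

The resolved address is then passed straight to bdev_nvme_attach_controller as `-a 10.0.0.1`, as the following trace lines show.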
00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.656 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.918 nvme0n1 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.918 09:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.918 
09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.918 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.179 nvme0n1 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:19.179 nvme0n1 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.179 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.440 nvme0n1 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.440 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.701 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.701 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.701 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.702 09:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.702 09:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.702 nvme0n1 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.702 
09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:19.702 
09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.702 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.963 09:12:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.963 nvme0n1 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.963 09:12:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.963 09:12:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.963 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.224 nvme0n1 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.224 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.225 09:12:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
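The secrets echoed throughout this trace all follow the DH-HMAC-CHAP representation `DHHC-1:<hh>:<base64>:`, where `<hh>` encodes the secret length class and the base64 payload is the key material with a 4-byte CRC32 appended. A minimal sanity-check helper is sketched below; it is a hypothetical addition (not part of `auth.sh`), and the length mapping (`01` = 32-byte/SHA-256, `02` = 48-byte/SHA-384, `03` = 64-byte/SHA-512, `00` = unhashed, any of those lengths) is my reading of the secret format, not something stated in this log:

```shell
# Hypothetical helper (not in auth.sh): sanity-check a DH-HMAC-CHAP secret
# of the form DHHC-1:<hh>:<base64(key || crc32)>: as seen in the trace above.
# Assumed mapping: 01 -> 32-byte key, 02 -> 48-byte, 03 -> 64-byte,
# 00 -> no hash, any of those lengths; +4 bytes of CRC32 in every case.
check_dhchap_secret() {
    local secret=$1 hh b64 len
    # overall shape: prefix, two-digit id, payload, trailing colon
    [[ $secret == DHHC-1:*:*: ]] || return 1
    hh=${secret#DHHC-1:}; hh=${hh%%:*}
    b64=${secret#DHHC-1:$hh:}; b64=${b64%:}
    # decoded length includes the 4-byte CRC32 appended to the key material;
    # a failed decode leaves len=0, which no case below accepts
    len=$(printf '%s' "$b64" | base64 -d 2>/dev/null | wc -c)
    case $hh in
        01) [[ $len -eq 36 ]] ;;
        02) [[ $len -eq 52 ]] ;;
        03) [[ $len -eq 68 ]] ;;
        00) [[ $len -eq 36 || $len -eq 52 || $len -eq 68 ]] ;;
        *)  return 1 ;;
    esac
}
```

For example, the `keyid=2` secret from this run, `DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC:`, decodes to 36 bytes and passes as a 32-byte SHA-256-class key.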
00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.225 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.486 nvme0n1 00:27:20.486 09:12:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.486 09:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:20.486 09:12:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.486 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.747 nvme0n1 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.747 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.748 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.748 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.748 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.008 09:12:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.008 nvme0n1 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.008 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.269 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.530 nvme0n1 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.530 
09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.791 nvme0n1 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.791 09:12:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.791 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.052 nvme0n1 00:27:22.052 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.052 09:12:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.052 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.052 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.052 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.052 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.052 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.052 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.052 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.052 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:22.053 
09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.053 09:12:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.053 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.313 nvme0n1 00:27:22.313 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.313 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.313 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.313 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.313 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.572 09:12:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.572 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.572 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.572 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.572 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.572 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.572 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.572 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:22.572 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.572 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.573 
09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.573 09:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.832 nvme0n1 00:27:22.832 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.832 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.833 09:12:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.833 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.403 nvme0n1 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
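The `get_main_ns_ip` fragment that repeats throughout this trace (nvmf/common.sh@769-783) picks the *name* of an environment variable from an associative map keyed by transport, then dereferences it. A standalone sketch of that logic follows; the variable names mirror the trace, but the `10.0.0.2` target value and the simple `return 1` error handling are stand-ins, not taken from the real `nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip logic visible in the trace: map transport ->
# env-var name, then use indirect expansion to read the actual address.
# Values below are stand-ins for this run's environment.
NVMF_INITIATOR_IP=10.0.0.1      # address echoed in the trace for tcp
NVMF_FIRST_TARGET_IP=10.0.0.2   # placeholder; would be used for rdma
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                    # common.sh@775
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
    [[ -z ${!ip} ]] && return 1            # common.sh@778, indirect expansion
    echo "${!ip}"                          # common.sh@783
}
get_main_ns_ip
```

This explains why the xtrace shows `ip=NVMF_INITIATOR_IP` (the name) immediately followed by `echo 10.0.0.1` (the dereferenced value).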
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:23.403 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.404 09:12:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.404 09:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.663 nvme0n1 00:27:23.663 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.663 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.663 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.663 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.663 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.663 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.933 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.933 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.933 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.933 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.933 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.933 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.933 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:23.933 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.934 09:12:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.934 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.194 nvme0n1 00:27:24.194 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.194 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.194 09:12:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.194 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.194 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.194 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.194 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.194 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.194 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.194 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.455 09:12:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.455 09:12:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.455 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.716 nvme0n1 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.716 09:12:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.716 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.716 09:12:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.975 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.975 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.975 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.975 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.235 nvme0n1 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.235 09:12:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.235 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.328 nvme0n1 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.328 09:12:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.328 09:12:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.328 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.328 09:12:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.635 nvme0n1 00:27:26.635 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.635 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.635 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.635 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.635 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.635 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.897 09:12:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.897 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.898 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.898 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.898 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.469 nvme0n1 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.469 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.410 nvme0n1 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.410 
09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.410 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.981 nvme0n1 00:27:28.981 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.981 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.981 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.981 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.981 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.981 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.981 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.981 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.981 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.981 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.982 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.244 nvme0n1 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.244 
09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.244 nvme0n1 
00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.244 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC:
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV:
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC:
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]]
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV:
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:29.505 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.506 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.506 nvme0n1
00:27:29.506 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.506 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.506 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:29.506 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.506 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.506 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==:
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z:
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==:
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]]
00:27:29.766 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z:
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.767 nvme0n1
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=:
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:29.767 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=:
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.028 nvme0n1
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx:
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=:
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx:
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=:
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.028 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.289 nvme0n1
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==:
00:27:30.289 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==:
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==:
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]]
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==:
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.290 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.550 nvme0n1
00:27:30.550 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.550 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.550 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:30.550 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.550 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.550 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC:
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV:
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC:
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]]
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV:
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.550 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.811 nvme0n1
00:27:30.811 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.811 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.811 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:30.811 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.811 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.811 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.811 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.811 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==:
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z:
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==:
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]]
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z:
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.812 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.074 nvme0n1
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=:
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=:
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 --
# dhgroup=ffdhe3072 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.074 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.336 nvme0n1 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.336 09:12:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.336 09:12:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.336 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.597 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.597 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.597 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.597 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.597 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.597 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.597 09:12:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.597 nvme0n1 00:27:31.597 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.597 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.597 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.597 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.597 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.859 
09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.859 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.121 nvme0n1 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.121 09:12:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.121 09:12:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.121 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.382 nvme0n1 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.382 09:12:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.382 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.643 nvme0n1 00:27:32.643 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.643 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.643 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.643 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.643 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.643 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.904 09:12:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.904 09:12:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.904 
09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.904 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.165 nvme0n1 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.165 09:12:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.165 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.737 nvme0n1 
00:27:33.737 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.737 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.737 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.737 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.737 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.737 09:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:33.737 09:12:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.737 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.737 
09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.738 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.998 nvme0n1 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.998 09:12:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.998 09:12:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:33.998 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.999 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.259 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.259 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.259 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.259 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.259 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.259 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.259 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.259 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.259 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.259 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 nvme0n1 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.519 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:34.519 09:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.519 09:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.519 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.091 nvme0n1 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.091 09:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:35.091 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.662 nvme0n1 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.662 09:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.662 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.663 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:35.663 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:35.663 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:35.663 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:35.663 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.663 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.663 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.234 nvme0n1 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.234 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.235 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.176 nvme0n1 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.176 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.746 nvme0n1 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.746 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.747 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.319 nvme0n1 00:27:38.319 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.319 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.319 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.319 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.319 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.580 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:39.150 nvme0n1 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.150 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.151 09:13:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.151 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.412 nvme0n1 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.412 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.672 nvme0n1 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.672 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.932 nvme0n1 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:39.932 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.933 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.193 nvme0n1 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:40.193 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.194 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:40.454 nvme0n1 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.454 09:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.454 09:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.454 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.455 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.455 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.455 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.715 nvme0n1 00:27:40.715 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.715 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.715 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.715 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.715 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:40.715 09:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.715 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.975 nvme0n1 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.975 
09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:40.975 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.976 09:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.976 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.236 nvme0n1 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.236 09:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.236 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.236 09:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.237 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.237 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.237 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.497 nvme0n1 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:41.497 09:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.497 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.758 nvme0n1 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.758 
09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:41.758 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.759 
09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.759 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.018 nvme0n1 00:27:42.018 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.019 09:13:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.019 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.278 nvme0n1 00:27:42.278 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.278 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.278 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.278 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.278 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.278 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.278 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.278 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.278 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.278 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.538 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.539 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.539 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.539 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.799 nvme0n1 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.799 09:13:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.799 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.059 nvme0n1 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.059 09:13:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=:
00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=:
00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:43.059 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@10 -- # set +x
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.060 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.320 nvme0n1
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:43.320
09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx:
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- #
ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=:
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx:
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]]
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=:
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.320 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.580 09:13:08
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.580 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.840 nvme0n1
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==:
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==:
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:43.840 09:13:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==:
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]]
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==:
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A
ip_candidates
00:27:43.840 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:43.841 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:43.841 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:43.841 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:43.841 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:43.841 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:43.841 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:43.841 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:43.841 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.841 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.410 nvme0n1
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0
]]
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC:
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV:
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC:
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]]
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV:
00:27:44.410
09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:44.410 09:13:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.410 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.980 nvme0n1
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.980 09:13:10
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==:
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z:
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==:
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]]
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z:
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:44.980 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:44.981 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.981 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.552 nvme0n1
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:45.552 09:13:10
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=:
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=:
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.552 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.812 nvme0n1
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:45.812
09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx:
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=:
00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- #
echo 'hmac(sha512)' 00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IxZTYzZDA5NDAxYmY4MzYwYTNmZDEzNzg3ZDkwZDb49jLx: 00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: ]] 00:27:45.812 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQzMWUxNjIwNTAwYTVhZmQ5MjhiYjdhYzg2NzUyMDFhMjg2NTRjYjkyZWIwMWZlNzA0NTkyNzcwM2JlOGQxZpArMSw=: 00:27:45.813 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:45.813 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.813 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.813 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.813 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.813 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.813 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:45.813 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.813 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.073 09:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.073 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.644 nvme0n1 00:27:46.644 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.644 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.644 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.644 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.644 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.644 09:13:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:46.644 09:13:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.644 09:13:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.644 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.645 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.645 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.645 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.216 nvme0n1 00:27:47.216 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.216 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.216 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.216 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.216 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.216 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.477 09:13:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:47.477 09:13:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.477 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.048 nvme0n1 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOWIzZmYyNTU1YzUwNDY0YTE1NjVmN2FjYTdmYmIyMjIyMjIyYjg2Mzk3ZDY5Z0DNtg==: 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: ]] 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzhiMjI3OGEzYzhkM2Q2MTdiNTgxNWM5ZmViYzgwMjI/6G8Z: 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:48.048 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.049 09:13:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.049 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.991 nvme0n1 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGRkMWE1ZGUyMTRjMGYyMWEyYmEyZGE0NDJlYzQxMTZmNjA3YzBhOTE3NzkxY2FhMWNiZjA4MzZkNDEwYTU3MqU/Y8U=: 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.991 
09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.991 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.560 nvme0n1 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.560 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.561 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.561 request: 00:27:49.561 { 00:27:49.561 "name": "nvme0", 00:27:49.561 "trtype": "tcp", 00:27:49.561 "traddr": "10.0.0.1", 00:27:49.561 "adrfam": "ipv4", 00:27:49.561 "trsvcid": "4420", 00:27:49.561 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:49.561 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:49.561 "prchk_reftag": false, 00:27:49.561 "prchk_guard": false, 00:27:49.561 "hdgst": false, 00:27:49.561 "ddgst": false, 00:27:49.561 "allow_unrecognized_csi": false, 00:27:49.561 "method": "bdev_nvme_attach_controller", 00:27:49.561 "req_id": 1 00:27:49.561 } 00:27:49.561 Got JSON-RPC error 
response 00:27:49.561 response: 00:27:49.561 { 00:27:49.561 "code": -5, 00:27:49.561 "message": "Input/output error" 00:27:49.561 } 00:27:49.561 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:49.561 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:49.561 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:49.561 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:49.561 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:49.561 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.561 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:49.561 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.561 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.561 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.822 request: 
00:27:49.822 { 00:27:49.822 "name": "nvme0", 00:27:49.822 "trtype": "tcp", 00:27:49.822 "traddr": "10.0.0.1", 00:27:49.822 "adrfam": "ipv4", 00:27:49.822 "trsvcid": "4420", 00:27:49.822 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:49.822 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:49.822 "prchk_reftag": false, 00:27:49.822 "prchk_guard": false, 00:27:49.822 "hdgst": false, 00:27:49.822 "ddgst": false, 00:27:49.822 "dhchap_key": "key2", 00:27:49.822 "allow_unrecognized_csi": false, 00:27:49.822 "method": "bdev_nvme_attach_controller", 00:27:49.822 "req_id": 1 00:27:49.822 } 00:27:49.822 Got JSON-RPC error response 00:27:49.822 response: 00:27:49.822 { 00:27:49.822 "code": -5, 00:27:49.822 "message": "Input/output error" 00:27:49.822 } 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.822 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:49.823 09:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.823 request: 00:27:49.823 { 00:27:49.823 "name": "nvme0", 00:27:49.823 "trtype": "tcp", 00:27:49.823 "traddr": "10.0.0.1", 00:27:49.823 "adrfam": "ipv4", 00:27:49.823 "trsvcid": "4420", 00:27:49.823 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:49.823 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:49.823 "prchk_reftag": false, 00:27:49.823 "prchk_guard": false, 00:27:49.823 "hdgst": false, 00:27:49.823 "ddgst": false, 00:27:49.823 "dhchap_key": "key1", 00:27:49.823 "dhchap_ctrlr_key": "ckey2", 00:27:49.823 "allow_unrecognized_csi": false, 00:27:49.823 "method": "bdev_nvme_attach_controller", 00:27:49.823 "req_id": 1 00:27:49.823 } 00:27:49.823 Got JSON-RPC error response 00:27:49.823 response: 00:27:49.823 { 00:27:49.823 "code": -5, 00:27:49.823 "message": "Input/output error" 00:27:49.823 } 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.823 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.084 nvme0n1 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:50.084 09:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:50.084 
09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.084 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.344 request: 00:27:50.344 { 00:27:50.344 "name": "nvme0", 00:27:50.344 "dhchap_key": "key1", 00:27:50.344 "dhchap_ctrlr_key": "ckey2", 00:27:50.344 "method": "bdev_nvme_set_keys", 00:27:50.344 "req_id": 1 00:27:50.344 } 00:27:50.344 Got JSON-RPC error response 00:27:50.344 response: 
00:27:50.344 { 00:27:50.344 "code": -13, 00:27:50.344 "message": "Permission denied" 00:27:50.344 } 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:50.344 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:51.282 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.282 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:51.282 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.282 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.282 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.282 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:51.282 09:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:52.222 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.222 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:52.222 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.222 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3ZTdiOTY0NjQzYTg5NmRiMDlhZTU3NGNmNjM0YjU0YWVkYzdmZmIxNGU4OTFl4GZkMA==: 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: ]] 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTRlMmJiODRhNmM3NGFlMjZmNjVlZDFlMWQ3NzBiZTJmZmUyZmYyYWJjMGY1NzMzcPsmTQ==: 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.482 nvme0n1 00:27:52.482 09:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:52.482 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFhN2ZkNzBjOTM2NTlhNTlkMDgwOTJlMGE1ZDFiYjBMZTMC: 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: ]] 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJiZGY1OGM1YjU0ZWY2MzYxMTE1MGM5NzQ5MjIyNjZ7cIKV: 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:52.483 09:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.483 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.743 request: 00:27:52.743 { 00:27:52.743 "name": "nvme0", 00:27:52.743 "dhchap_key": "key2", 00:27:52.743 "dhchap_ctrlr_key": "ckey1", 00:27:52.743 "method": "bdev_nvme_set_keys", 00:27:52.743 "req_id": 1 00:27:52.743 } 00:27:52.743 Got JSON-RPC error response 00:27:52.743 response: 00:27:52.743 { 00:27:52.743 "code": -13, 00:27:52.743 "message": "Permission denied" 00:27:52.743 } 00:27:52.743 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:52.743 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:52.743 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:52.743 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:52.743 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:52.743 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.743 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:52.743 09:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.743 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.743 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.743 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:52.743 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:53.684 rmmod nvme_tcp 
00:27:53.684 rmmod nvme_fabrics 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 846998 ']' 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 846998 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 846998 ']' 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 846998 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:53.684 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 846998 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 846998' 00:27:53.944 killing process with pid 846998 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 846998 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 846998 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 
-- # nvmf_tcp_fini 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.944 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:56.489 09:13:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:56.489 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:59.791 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:59.791 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:00.363 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.r1e /tmp/spdk.key-null.XmY /tmp/spdk.key-sha256.Nb2 /tmp/spdk.key-sha384.rw6 
/tmp/spdk.key-sha512.4fm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:00.363 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:03.671 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:03.671 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:03.671 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:03.932 00:28:03.932 real 1m0.948s 00:28:03.932 user 0m54.649s 00:28:03.932 sys 0m16.209s 00:28:03.932 09:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.932 09:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.932 ************************************ 00:28:03.932 END TEST nvmf_auth_host 00:28:03.932 ************************************ 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.192 ************************************ 00:28:04.192 START TEST nvmf_digest 00:28:04.192 ************************************ 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:04.192 * Looking for test storage... 00:28:04.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.192 09:13:29 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:04.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.192 --rc genhtml_branch_coverage=1 00:28:04.192 --rc genhtml_function_coverage=1 00:28:04.192 --rc genhtml_legend=1 00:28:04.192 --rc geninfo_all_blocks=1 00:28:04.192 --rc geninfo_unexecuted_blocks=1 00:28:04.192 00:28:04.192 ' 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:04.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.192 --rc genhtml_branch_coverage=1 00:28:04.192 --rc genhtml_function_coverage=1 00:28:04.192 --rc genhtml_legend=1 00:28:04.192 --rc geninfo_all_blocks=1 00:28:04.192 --rc geninfo_unexecuted_blocks=1 00:28:04.192 00:28:04.192 ' 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:04.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.192 --rc genhtml_branch_coverage=1 00:28:04.192 --rc genhtml_function_coverage=1 00:28:04.192 --rc genhtml_legend=1 00:28:04.192 --rc geninfo_all_blocks=1 00:28:04.192 --rc geninfo_unexecuted_blocks=1 00:28:04.192 00:28:04.192 ' 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:04.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.192 --rc genhtml_branch_coverage=1 00:28:04.192 --rc genhtml_function_coverage=1 00:28:04.192 --rc genhtml_legend=1 00:28:04.192 --rc geninfo_all_blocks=1 00:28:04.192 --rc geninfo_unexecuted_blocks=1 00:28:04.192 00:28:04.192 ' 00:28:04.192 09:13:29 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:04.192 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.454 
09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:04.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:04.454 09:13:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:04.454 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.597 09:13:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:12.597 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.597 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:12.598 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:12.598 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:12.598 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.598 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:28:12.598 00:28:12.598 --- 10.0.0.2 ping statistics --- 00:28:12.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.598 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:12.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:28:12.598 00:28:12.598 --- 10.0.0.1 ping statistics --- 00:28:12.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.598 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:12.598 ************************************ 00:28:12.598 START TEST nvmf_digest_clean 00:28:12.598 ************************************ 00:28:12.598 
09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=864007 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 864007 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 864007 ']' 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.598 09:13:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.598 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.598 [2024-11-20 09:13:37.356619] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:28:12.598 [2024-11-20 09:13:37.356682] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.598 [2024-11-20 09:13:37.456567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.598 [2024-11-20 09:13:37.509176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.598 [2024-11-20 09:13:37.509224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.599 [2024-11-20 09:13:37.509234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.599 [2024-11-20 09:13:37.509241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.599 [2024-11-20 09:13:37.509249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:12.599 [2024-11-20 09:13:37.510029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.859 null0 00:28:12.859 [2024-11-20 09:13:38.304295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.859 [2024-11-20 09:13:38.328620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:12.859 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:12.860 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:12.860 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=864347 00:28:12.860 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 864347 /var/tmp/bperf.sock 00:28:12.860 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 864347 ']' 00:28:12.860 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:12.860 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:12.860 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.860 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:12.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:12.860 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.860 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:13.119 [2024-11-20 09:13:38.388973] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:28:13.120 [2024-11-20 09:13:38.389041] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864347 ] 00:28:13.120 [2024-11-20 09:13:38.481885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.120 [2024-11-20 09:13:38.534037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.691 09:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.691 09:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:13.691 09:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:13.691 09:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:13.691 09:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:13.953 09:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.953 09:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.525 nvme0n1 00:28:14.525 09:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:14.525 09:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.525 Running I/O for 2 seconds... 00:28:16.479 18140.00 IOPS, 70.86 MiB/s [2024-11-20T08:13:42.008Z] 19783.50 IOPS, 77.28 MiB/s 00:28:16.479 Latency(us) 00:28:16.479 [2024-11-20T08:13:42.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.479 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:16.479 nvme0n1 : 2.01 19795.34 77.33 0.00 0.00 6457.44 2826.24 22609.92 00:28:16.479 [2024-11-20T08:13:42.008Z] =================================================================================================================== 00:28:16.479 [2024-11-20T08:13:42.008Z] Total : 19795.34 77.33 0.00 0.00 6457.44 2826.24 22609.92 00:28:16.739 { 00:28:16.739 "results": [ 00:28:16.739 { 00:28:16.739 "job": "nvme0n1", 00:28:16.739 "core_mask": "0x2", 00:28:16.739 "workload": "randread", 00:28:16.739 "status": "finished", 00:28:16.739 "queue_depth": 128, 00:28:16.739 "io_size": 4096, 00:28:16.739 "runtime": 2.00527, 00:28:16.739 "iops": 19795.33928099458, 00:28:16.739 "mibps": 77.32554406638508, 00:28:16.739 "io_failed": 0, 00:28:16.739 "io_timeout": 0, 00:28:16.739 "avg_latency_us": 6457.444177184364, 00:28:16.739 "min_latency_us": 2826.24, 00:28:16.739 "max_latency_us": 22609.92 00:28:16.739 } 00:28:16.739 ], 00:28:16.739 "core_count": 1 00:28:16.739 } 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:16.739 
09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:16.739 | select(.opcode=="crc32c") 00:28:16.739 | "\(.module_name) \(.executed)"' 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 864347 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 864347 ']' 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 864347 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.739 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 864347 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 864347' 00:28:16.999 killing process with pid 864347 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 864347 00:28:16.999 Received shutdown signal, test time was about 2.000000 seconds 00:28:16.999 00:28:16.999 Latency(us) 00:28:16.999 [2024-11-20T08:13:42.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.999 [2024-11-20T08:13:42.528Z] =================================================================================================================== 00:28:16.999 [2024-11-20T08:13:42.528Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 864347 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=865033 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 865033 /var/tmp/bperf.sock 00:28:16.999 
09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 865033 ']' 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:16.999 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:17.000 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.000 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:17.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:17.000 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.000 09:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:17.000 [2024-11-20 09:13:42.436362] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:28:17.000 [2024-11-20 09:13:42.436415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865033 ] 00:28:17.000 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:17.000 Zero copy mechanism will not be used. 
00:28:17.000 [2024-11-20 09:13:42.519152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.260 [2024-11-20 09:13:42.547156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.831 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.831 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:17.831 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:17.831 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:17.831 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:18.091 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.091 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.351 nvme0n1 00:28:18.351 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:18.351 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.351 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:18.351 Zero copy mechanism will not be used. 00:28:18.351 Running I/O for 2 seconds... 
00:28:20.673 4266.00 IOPS, 533.25 MiB/s [2024-11-20T08:13:46.202Z] 3849.00 IOPS, 481.12 MiB/s 00:28:20.673 Latency(us) 00:28:20.673 [2024-11-20T08:13:46.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.673 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:20.673 nvme0n1 : 2.05 3770.53 471.32 0.00 0.00 4159.81 771.41 47841.28 00:28:20.673 [2024-11-20T08:13:46.202Z] =================================================================================================================== 00:28:20.673 [2024-11-20T08:13:46.202Z] Total : 3770.53 471.32 0.00 0.00 4159.81 771.41 47841.28 00:28:20.673 { 00:28:20.673 "results": [ 00:28:20.673 { 00:28:20.673 "job": "nvme0n1", 00:28:20.673 "core_mask": "0x2", 00:28:20.673 "workload": "randread", 00:28:20.673 "status": "finished", 00:28:20.673 "queue_depth": 16, 00:28:20.673 "io_size": 131072, 00:28:20.673 "runtime": 2.045865, 00:28:20.673 "iops": 3770.53226874696, 00:28:20.673 "mibps": 471.31653359337, 00:28:20.673 "io_failed": 0, 00:28:20.673 "io_timeout": 0, 00:28:20.673 "avg_latency_us": 4159.8053063693715, 00:28:20.673 "min_latency_us": 771.4133333333333, 00:28:20.673 "max_latency_us": 47841.28 00:28:20.673 } 00:28:20.673 ], 00:28:20.673 "core_count": 1 00:28:20.673 } 00:28:20.673 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:20.673 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:20.673 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:20.673 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:20.673 | select(.opcode=="crc32c") 00:28:20.673 | "\(.module_name) \(.executed)"' 00:28:20.673 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 865033 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 865033 ']' 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 865033 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 865033 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 865033' 00:28:20.673 killing process with pid 865033 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 865033 00:28:20.673 Received shutdown signal, test time was about 2.000000 seconds 00:28:20.673 
00:28:20.673 Latency(us) 00:28:20.673 [2024-11-20T08:13:46.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.673 [2024-11-20T08:13:46.202Z] =================================================================================================================== 00:28:20.673 [2024-11-20T08:13:46.202Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.673 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 865033 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=865737 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 865737 /var/tmp/bperf.sock 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 865737 ']' 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:20.933 09:13:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.933 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:20.933 [2024-11-20 09:13:46.290616] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:28:20.933 [2024-11-20 09:13:46.290673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865737 ] 00:28:20.933 [2024-11-20 09:13:46.371863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.933 [2024-11-20 09:13:46.401451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.872 09:13:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.872 09:13:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:21.872 09:13:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:21.872 09:13:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:21.873 09:13:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:21.873 09:13:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.873 09:13:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.442 nvme0n1 00:28:22.442 09:13:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:22.442 09:13:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:22.442 Running I/O for 2 seconds... 
00:28:24.322 29478.00 IOPS, 115.15 MiB/s [2024-11-20T08:13:49.851Z] 29583.00 IOPS, 115.56 MiB/s 00:28:24.322 Latency(us) 00:28:24.322 [2024-11-20T08:13:49.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.322 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:24.322 nvme0n1 : 2.01 29586.82 115.57 0.00 0.00 4319.29 2048.00 11359.57 00:28:24.322 [2024-11-20T08:13:49.851Z] =================================================================================================================== 00:28:24.322 [2024-11-20T08:13:49.851Z] Total : 29586.82 115.57 0.00 0.00 4319.29 2048.00 11359.57 00:28:24.322 { 00:28:24.322 "results": [ 00:28:24.322 { 00:28:24.322 "job": "nvme0n1", 00:28:24.322 "core_mask": "0x2", 00:28:24.322 "workload": "randwrite", 00:28:24.322 "status": "finished", 00:28:24.322 "queue_depth": 128, 00:28:24.322 "io_size": 4096, 00:28:24.322 "runtime": 2.00542, 00:28:24.322 "iops": 29586.819718562696, 00:28:24.322 "mibps": 115.57351452563553, 00:28:24.322 "io_failed": 0, 00:28:24.322 "io_timeout": 0, 00:28:24.322 "avg_latency_us": 4319.291621442456, 00:28:24.322 "min_latency_us": 2048.0, 00:28:24.322 "max_latency_us": 11359.573333333334 00:28:24.322 } 00:28:24.322 ], 00:28:24.322 "core_count": 1 00:28:24.322 } 00:28:24.322 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:24.322 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:24.322 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:24.322 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:24.322 | select(.opcode=="crc32c") 00:28:24.322 | "\(.module_name) \(.executed)"' 00:28:24.322 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 865737 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 865737 ']' 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 865737 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 865737 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 865737' 00:28:24.583 killing process with pid 865737 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 865737 00:28:24.583 Received shutdown signal, test time was about 2.000000 seconds 00:28:24.583 
00:28:24.583 Latency(us) 00:28:24.583 [2024-11-20T08:13:50.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.583 [2024-11-20T08:13:50.112Z] =================================================================================================================== 00:28:24.583 [2024-11-20T08:13:50.112Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.583 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 865737 00:28:24.843 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:24.843 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:24.843 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:24.843 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:24.843 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:24.843 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:24.844 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:24.844 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=866585 00:28:24.844 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 866585 /var/tmp/bperf.sock 00:28:24.844 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 866585 ']' 00:28:24.844 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:24.844 09:13:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:24.844 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.844 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:24.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:24.844 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.844 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.844 [2024-11-20 09:13:50.235917] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:28:24.844 [2024-11-20 09:13:50.235977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866585 ] 00:28:24.844 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:24.844 Zero copy mechanism will not be used. 
00:28:24.844 [2024-11-20 09:13:50.317735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.844 [2024-11-20 09:13:50.347259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.784 09:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.784 09:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:25.784 09:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:25.784 09:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:25.784 09:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:25.784 09:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.784 09:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.355 nvme0n1 00:28:26.355 09:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:26.355 09:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.355 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.355 Zero copy mechanism will not be used. 00:28:26.355 Running I/O for 2 seconds... 
00:28:28.239 4936.00 IOPS, 617.00 MiB/s [2024-11-20T08:13:53.768Z] 4259.50 IOPS, 532.44 MiB/s 00:28:28.239 Latency(us) 00:28:28.239 [2024-11-20T08:13:53.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.239 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:28.239 nvme0n1 : 2.01 4254.84 531.85 0.00 0.00 3753.22 1256.11 6826.67 00:28:28.239 [2024-11-20T08:13:53.768Z] =================================================================================================================== 00:28:28.239 [2024-11-20T08:13:53.768Z] Total : 4254.84 531.85 0.00 0.00 3753.22 1256.11 6826.67 00:28:28.239 { 00:28:28.239 "results": [ 00:28:28.239 { 00:28:28.239 "job": "nvme0n1", 00:28:28.239 "core_mask": "0x2", 00:28:28.239 "workload": "randwrite", 00:28:28.239 "status": "finished", 00:28:28.239 "queue_depth": 16, 00:28:28.239 "io_size": 131072, 00:28:28.239 "runtime": 2.005951, 00:28:28.239 "iops": 4254.839724400048, 00:28:28.239 "mibps": 531.854965550006, 00:28:28.239 "io_failed": 0, 00:28:28.239 "io_timeout": 0, 00:28:28.239 "avg_latency_us": 3753.21854950205, 00:28:28.239 "min_latency_us": 1256.1066666666666, 00:28:28.239 "max_latency_us": 6826.666666666667 00:28:28.239 } 00:28:28.239 ], 00:28:28.239 "core_count": 1 00:28:28.239 } 00:28:28.239 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:28.239 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:28.239 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:28.239 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:28.239 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:28:28.239 | select(.opcode=="crc32c") 00:28:28.239 | "\(.module_name) \(.executed)"' 00:28:28.500 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:28.500 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:28.500 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:28.500 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:28.501 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 866585 00:28:28.501 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 866585 ']' 00:28:28.501 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 866585 00:28:28.501 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:28.501 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:28.501 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 866585 00:28:28.501 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:28.501 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:28.501 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 866585' 00:28:28.501 killing process with pid 866585 00:28:28.501 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 866585 00:28:28.501 Received shutdown signal, test time was about 2.000000 seconds 00:28:28.501 00:28:28.501 
Latency(us) 00:28:28.501 [2024-11-20T08:13:54.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.501 [2024-11-20T08:13:54.030Z] =================================================================================================================== 00:28:28.501 [2024-11-20T08:13:54.030Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:28.501 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 866585 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 864007 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 864007 ']' 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 864007 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 864007 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 864007' 00:28:28.762 killing process with pid 864007 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 864007 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 864007 00:28:28.762 00:28:28.762 real 0m16.959s 00:28:28.762 user 
0m33.580s 00:28:28.762 sys 0m3.722s 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:28.762 ************************************ 00:28:28.762 END TEST nvmf_digest_clean 00:28:28.762 ************************************ 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.762 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.023 ************************************ 00:28:29.023 START TEST nvmf_digest_error 00:28:29.023 ************************************ 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=867428 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 867428 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 867428 ']' 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:29.023 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.023 [2024-11-20 09:13:54.379009] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:28:29.023 [2024-11-20 09:13:54.379064] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.023 [2024-11-20 09:13:54.472040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.023 [2024-11-20 09:13:54.505871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.023 [2024-11-20 09:13:54.505901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:29.023 [2024-11-20 09:13:54.505907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.023 [2024-11-20 09:13:54.505912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.023 [2024-11-20 09:13:54.505916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.023 [2024-11-20 09:13:54.506406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.965 [2024-11-20 09:13:55.216359] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.965 09:13:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.965 null0 00:28:29.965 [2024-11-20 09:13:55.294164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.965 [2024-11-20 09:13:55.318366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=867585 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 867585 /var/tmp/bperf.sock 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 867585 ']' 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:29.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:29.965 09:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.965 [2024-11-20 09:13:55.375030] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:28:29.965 [2024-11-20 09:13:55.375077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867585 ] 00:28:29.965 [2024-11-20 09:13:55.457809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.965 [2024-11-20 09:13:55.487739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.907 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.907 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:30.907 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:30.907 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:30.907 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:30.907 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.907 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.907 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.907 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.907 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.167 nvme0n1 00:28:31.167 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:31.168 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.168 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.168 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.168 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:31.168 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:31.429 Running I/O for 2 seconds... 00:28:31.429 [2024-11-20 09:13:56.792540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.792571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.792582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.802658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.802678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.802686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.812597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.812615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.812622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.822068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.822086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21408 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.822093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.832914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.832932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.832939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.840942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.840959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.840966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.851212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.851229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.851236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.861203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.861220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.861227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.870264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.870285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.870292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.877912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.877929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.877936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.887117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.887134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.887141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.896237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 
00:28:31.429 [2024-11-20 09:13:56.896254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.896261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.905226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.905244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.905251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.914618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.914635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.914641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.923527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.923544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.923551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.931612] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.931629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.931636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.940792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.940808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.940814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.429 [2024-11-20 09:13:56.949948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.429 [2024-11-20 09:13:56.949965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.429 [2024-11-20 09:13:56.949972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.691 [2024-11-20 09:13:56.958882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.691 [2024-11-20 09:13:56.958900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.691 [2024-11-20 09:13:56.958906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:31.691 [2024-11-20 09:13:56.967345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.691 [2024-11-20 09:13:56.967363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.691 [2024-11-20 09:13:56.967369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.691 [2024-11-20 09:13:56.976852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.691 [2024-11-20 09:13:56.976870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.691 [2024-11-20 09:13:56.976876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.691 [2024-11-20 09:13:56.985913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.691 [2024-11-20 09:13:56.985930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.691 [2024-11-20 09:13:56.985936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.691 [2024-11-20 09:13:56.994067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.691 [2024-11-20 09:13:56.994083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.691 [2024-11-20 09:13:56.994090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.691 [2024-11-20 09:13:57.002364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.002382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.002388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.012128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.012146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.012152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.021473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.021490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.021500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.029682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.029699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.029705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.039022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.039039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.039045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.047923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.047940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.047946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.056545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.056562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.056569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.065658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.065675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:31.692 [2024-11-20 09:13:57.065681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.074931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.074948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.074954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.082743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.082761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.082768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.092473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.092491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.092498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.100424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.100442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 
nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.100449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.109409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.109426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.109432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.119514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.119532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.119538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.129530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.129547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.129553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.138009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.138026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.138032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.146229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.146245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.146251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.155174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.155192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.155199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.163266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.163283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.163290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.172713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 
00:28:31.692 [2024-11-20 09:13:57.172730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.172739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.181985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.182002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.182009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.190309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.190326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.190332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.199783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.199800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.199806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.692 [2024-11-20 09:13:57.208400] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.692 [2024-11-20 09:13:57.208417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.692 [2024-11-20 09:13:57.208423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.954 [2024-11-20 09:13:57.217564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.954 [2024-11-20 09:13:57.217581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.954 [2024-11-20 09:13:57.217587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.954 [2024-11-20 09:13:57.226028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.954 [2024-11-20 09:13:57.226046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.954 [2024-11-20 09:13:57.226053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.954 [2024-11-20 09:13:57.234937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.954 [2024-11-20 09:13:57.234954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.954 [2024-11-20 09:13:57.234961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:31.954 [2024-11-20 09:13:57.243960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.954 [2024-11-20 09:13:57.243977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.954 [2024-11-20 09:13:57.243984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.954 [2024-11-20 09:13:57.252938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.954 [2024-11-20 09:13:57.252959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.954 [2024-11-20 09:13:57.252965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.954 [2024-11-20 09:13:57.262030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.954 [2024-11-20 09:13:57.262047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.954 [2024-11-20 09:13:57.262054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.954 [2024-11-20 09:13:57.271002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.954 [2024-11-20 09:13:57.271018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.954 [2024-11-20 09:13:57.271025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.954 [2024-11-20 09:13:57.279529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.954 [2024-11-20 09:13:57.279546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.954 [2024-11-20 09:13:57.279553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.954 [2024-11-20 09:13:57.288788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.954 [2024-11-20 09:13:57.288805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.288811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.297376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.297393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.297400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.305651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.305669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.305675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.315141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.315163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.315170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.323790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.323807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.323814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.332645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.332663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.332669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.341427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.341444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:31.955 [2024-11-20 09:13:57.341451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.350462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.350478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.350485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.359610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.359627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.359633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.368015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.368032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.368038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.376897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.376914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:9043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.376921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.385424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.385441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.385447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.394866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.394883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.394889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.404410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.404427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.404437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.413071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.413088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.413095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.421867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.421885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.421891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.430139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.430157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.430169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.439383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.439400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.439406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.448571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 
00:28:31.955 [2024-11-20 09:13:57.448588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.448594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.457593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.457610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.457616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.465787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.465804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.465812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.955 [2024-11-20 09:13:57.474865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:31.955 [2024-11-20 09:13:57.474882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.955 [2024-11-20 09:13:57.474888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.482996] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.483017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.483023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.495657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.495675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.495681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.507844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.507861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.507867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.516976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.516993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.517000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.524982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.524999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.525006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.535038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.535055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.535061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.542935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.542952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.542958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.552725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.552743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.552749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.561016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.561032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.561039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.569940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.569957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.569963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.579628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.579645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.579652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.588372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.588389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.588396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.598279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.598296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.598303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.608873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.608890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.608896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.617104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.617121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.617127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.625929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.625947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:32.224 [2024-11-20 09:13:57.625954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.637095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.637112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.637119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.647123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.647139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.647148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.655650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.655667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.655673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.665662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.665679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:9202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.665685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.224 [2024-11-20 09:13:57.676095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.224 [2024-11-20 09:13:57.676112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.224 [2024-11-20 09:13:57.676119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.225 [2024-11-20 09:13:57.685374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.225 [2024-11-20 09:13:57.685391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.225 [2024-11-20 09:13:57.685397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.225 [2024-11-20 09:13:57.694419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.225 [2024-11-20 09:13:57.694436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.225 [2024-11-20 09:13:57.694443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.225 [2024-11-20 09:13:57.702973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.225 [2024-11-20 09:13:57.702989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.225 [2024-11-20 09:13:57.702996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.225 [2024-11-20 09:13:57.711564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.225 [2024-11-20 09:13:57.711581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.225 [2024-11-20 09:13:57.711588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.225 [2024-11-20 09:13:57.719924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.225 [2024-11-20 09:13:57.719941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.225 [2024-11-20 09:13:57.719947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.225 [2024-11-20 09:13:57.728951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.225 [2024-11-20 09:13:57.728971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.225 [2024-11-20 09:13:57.728977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.225 [2024-11-20 09:13:57.737940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 
00:28:32.225 [2024-11-20 09:13:57.737957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.225 [2024-11-20 09:13:57.737963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.225 [2024-11-20 09:13:57.746071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.225 [2024-11-20 09:13:57.746088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.225 [2024-11-20 09:13:57.746094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.755239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.755257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.755263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.766771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.766788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.766795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 27721.00 IOPS, 108.29 MiB/s [2024-11-20T08:13:58.028Z] [2024-11-20 
09:13:57.775326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.775343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.775349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.785704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.785721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.785728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.796685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.796703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.796709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.804938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.804954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.804964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.813349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.813365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.813371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.822747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.822764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.822771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.831129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.831146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.831153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.840655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.840671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.840678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.848916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.848933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.848939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.857575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.857591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.857597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.866865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.866882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.866889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.876019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.876036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.876042] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.885003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.885023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.885030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.895371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.895388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.895394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.903387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.903403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.903410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.913170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.913187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9579 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.913194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.921545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.921562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.921568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.931285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.931302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.931308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.940003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.940020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.940027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.499 [2024-11-20 09:13:57.948811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.499 [2024-11-20 09:13:57.948827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:9168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.499 [2024-11-20 09:13:57.948833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.500 [2024-11-20 09:13:57.958132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.500 [2024-11-20 09:13:57.958149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.500 [2024-11-20 09:13:57.958155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.500 [2024-11-20 09:13:57.967201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.500 [2024-11-20 09:13:57.967218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.500 [2024-11-20 09:13:57.967224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.500 [2024-11-20 09:13:57.976427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.500 [2024-11-20 09:13:57.976444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.500 [2024-11-20 09:13:57.976451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.500 [2024-11-20 09:13:57.988142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.500 [2024-11-20 09:13:57.988163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.500 [2024-11-20 09:13:57.988170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.500 [2024-11-20 09:13:57.996193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.500 [2024-11-20 09:13:57.996210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.500 [2024-11-20 09:13:57.996216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.500 [2024-11-20 09:13:58.004608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.500 [2024-11-20 09:13:58.004624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.500 [2024-11-20 09:13:58.004631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.500 [2024-11-20 09:13:58.013598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.500 [2024-11-20 09:13:58.013615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.500 [2024-11-20 09:13:58.013621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.023365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 
00:28:32.775 [2024-11-20 09:13:58.023382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.023389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.031950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.031967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.031973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.042122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.042139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.042149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.050241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.050258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.050265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.058846] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.058863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.058869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.067613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.067629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.067635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.076608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.076626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.076632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.085843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.085860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.085867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.094157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.094177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.094183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.105256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.105276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.105283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.114672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.114689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.114695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.122053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.122073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.122080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.131573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.131589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.131595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.141031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.141048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.141054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.149014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.149030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.149037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.158076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.158093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 
09:13:58.158100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.166555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.166572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.166578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.175771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.175787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.175794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.184604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.184621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.184627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.193980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.193997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4501 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.194003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.201899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.201917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.201923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.212111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.212127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.212134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.221462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.221479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.221485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.230533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.230550] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.230557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.239365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.239382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.239388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.248101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.248118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.248124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.256877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.256893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.256900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.265489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 
09:13:58.265506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.265513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.274816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.274833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.274843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.284052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.284069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.284075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.775 [2024-11-20 09:13:58.292179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:32.775 [2024-11-20 09:13:58.292196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.775 [2024-11-20 09:13:58.292203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.301341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.301358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.301365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.310653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.310670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.310676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.319103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.319120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.319127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.327212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.327228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.327235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.336443] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.336459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.336466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.345914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.345932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.345938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.355506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.355526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.355532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.363846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.363863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.363869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.372746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.372762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.372768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.382046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.382062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.382069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.391205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.391223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.391229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.400252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.400269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.400275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.409758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.409775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.409781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.418041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.418058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.418064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.426251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.426267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.426277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.434743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.434760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.434766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.443966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.443983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.443989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.453908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.453925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.453931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.465737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.465754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.465761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.475474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.475490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14758 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.475497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.484539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.484556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.484563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.492186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.492203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.492209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.502021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.502038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.041 [2024-11-20 09:13:58.502045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.041 [2024-11-20 09:13:58.514134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.041 [2024-11-20 09:13:58.514154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:16446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.042 [2024-11-20 09:13:58.514165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.042 [2024-11-20 09:13:58.523722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.042 [2024-11-20 09:13:58.523738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.042 [2024-11-20 09:13:58.523744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.042 [2024-11-20 09:13:58.531645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.042 [2024-11-20 09:13:58.531662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.042 [2024-11-20 09:13:58.531671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.042 [2024-11-20 09:13:58.540942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.042 [2024-11-20 09:13:58.540960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.042 [2024-11-20 09:13:58.540966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.042 [2024-11-20 09:13:58.552012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.042 [2024-11-20 09:13:58.552030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.042 [2024-11-20 09:13:58.552037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.042 [2024-11-20 09:13:58.559806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.042 [2024-11-20 09:13:58.559823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.042 [2024-11-20 09:13:58.559829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.569567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.569585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.569592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.579524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.579541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.579548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.591218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.591235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.591242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.600460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.600477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.600483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.609281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.609297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.609303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.617446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.617464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.617470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.626534] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.626551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.626557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.635834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.635851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.635857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.647232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.647249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.647255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.657979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.657996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.658003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.666228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.666245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.666251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.675923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.675940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.675950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.685351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.685368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.685375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.693939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.693957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.693963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.702119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.702137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.702145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.710490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.710507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.710514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.719868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.719885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.719891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.729359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.729376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.319 [2024-11-20 09:13:58.729383] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.319 [2024-11-20 09:13:58.737864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.319 [2024-11-20 09:13:58.737881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.320 [2024-11-20 09:13:58.737887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.320 [2024-11-20 09:13:58.746697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.320 [2024-11-20 09:13:58.746714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.320 [2024-11-20 09:13:58.746721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.320 [2024-11-20 09:13:58.755205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.320 [2024-11-20 09:13:58.755225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.320 [2024-11-20 09:13:58.755232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.320 [2024-11-20 09:13:58.764070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0) 00:28:33.320 [2024-11-20 09:13:58.764088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9645 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:28:33.320 [2024-11-20 09:13:58.764094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.320 [2024-11-20 09:13:58.773667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff75c0)
00:28:33.320 [2024-11-20 09:13:58.773684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.320 [2024-11-20 09:13:58.773691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.320 27819.50 IOPS, 108.67 MiB/s
00:28:33.320 Latency(us)
00:28:33.320 [2024-11-20T08:13:58.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:33.320 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:33.320 nvme0n1 : 2.00 27831.04 108.71 0.00 0.00 4594.64 2198.19 19551.57
00:28:33.320 [2024-11-20T08:13:58.849Z] ===================================================================================================================
00:28:33.320 [2024-11-20T08:13:58.849Z] Total : 27831.04 108.71 0.00 0.00 4594.64 2198.19 19551.57
00:28:33.320 {
00:28:33.320   "results": [
00:28:33.320     {
00:28:33.320       "job": "nvme0n1",
00:28:33.320       "core_mask": "0x2",
00:28:33.320       "workload": "randread",
00:28:33.320       "status": "finished",
00:28:33.320       "queue_depth": 128,
00:28:33.320       "io_size": 4096,
00:28:33.320       "runtime": 2.00377,
00:28:33.320       "iops": 27831.03849244175,
00:28:33.320       "mibps": 108.71499411110058,
00:28:33.320       "io_failed": 0,
00:28:33.320       "io_timeout": 0,
00:28:33.320       "avg_latency_us": 4594.640950143753,
00:28:33.320       "min_latency_us": 2198.1866666666665,
00:28:33.320       "max_latency_us": 19551.573333333334
00:28:33.320     }
00:28:33.320   ],
00:28:33.320   "core_count": 1
00:28:33.320 }
00:28:33.320 09:13:58
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:33.320 09:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:33.320 09:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:33.320 | .driver_specific
00:28:33.320 | .nvme_error
00:28:33.320 | .status_code
00:28:33.320 | .command_transient_transport_error'
00:28:33.320 09:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:33.580 09:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
00:28:33.580 09:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 867585
00:28:33.580 09:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 867585 ']'
00:28:33.580 09:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 867585
00:28:33.580 09:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:33.580 09:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:33.580 09:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 867585
00:28:33.580 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:33.580 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:33.580 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 867585'
killing process with pid 867585
09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 867585
Received shutdown signal, test time was about 2.000000 seconds
00:28:33.580
00:28:33.580 Latency(us)
[2024-11-20T08:13:59.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:33.580 [2024-11-20T08:13:59.109Z] ===================================================================================================================
00:28:33.580 [2024-11-20T08:13:59.109Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:33.580 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 867585
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=868423
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 868423 /var/tmp/bperf.sock
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 868423 ']'
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:33.840 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:33.840 [2024-11-20 09:13:59.196373] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization...
00:28:33.840 [2024-11-20 09:13:59.196430] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868423 ]
00:28:33.840 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:33.840 Zero copy mechanism will not be used.
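The completed randread job earlier in this log reported both an "iops" and a "mibps" field in its results JSON. As a quick standalone cross-check of those numbers (not part of the test suite; the values below are copied from that JSON, and the conversion is a plausible reading of the field names rather than an SPDK-documented formula):

```python
# Values taken from the bdevperf results JSON above (4096-byte randread job).
io_size = 4096            # bytes per I/O, from the "io_size" field
iops = 27831.03849244175  # from the "iops" field

# Assumed relationship: MiB/s = IOPS * bytes-per-IO / (1024 * 1024).
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))  # -> 108.71, matching the reported "mibps" field
```

The same arithmetic also explains the rounded `27831.04 108.71` pair in the flattened latency table.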
00:28:33.840 [2024-11-20 09:13:59.279132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.840 [2024-11-20 09:13:59.308491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.779 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.779 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:34.779 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:34.779 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:34.779 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:34.779 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.779 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.779 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.779 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.779 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:35.040 nvme0n1 00:28:35.040 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:35.040 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.040 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:35.040 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.040 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:35.040 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:35.301 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:35.301 Zero copy mechanism will not be used. 00:28:35.301 Running I/O for 2 seconds... 00:28:35.301 [2024-11-20 09:14:00.646444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.646476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.646485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.656100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.656123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.656131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.301 
[2024-11-20 09:14:00.666628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.666649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.666656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.677302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.677321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.677328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.687857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.687877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.687884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.699242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.699262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.699268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.709879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.709898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.709905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.719376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.719395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.719401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.727600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.727619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.727626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.739175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.739194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.739200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.750225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.750243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.750250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.760719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.760737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.760744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.766262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.766280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.766286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.777463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.777482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:35.301 [2024-11-20 09:14:00.777492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.787507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.787526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.301 [2024-11-20 09:14:00.787532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.301 [2024-11-20 09:14:00.796838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.301 [2024-11-20 09:14:00.796856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.302 [2024-11-20 09:14:00.796863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.302 [2024-11-20 09:14:00.807358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.302 [2024-11-20 09:14:00.807378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.302 [2024-11-20 09:14:00.807384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.302 [2024-11-20 09:14:00.818317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.302 [2024-11-20 09:14:00.818336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.302 [2024-11-20 09:14:00.818342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.562 [2024-11-20 09:14:00.830389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.830408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.830415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.842575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.842594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.842600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.854895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.854914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.854921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.866382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.866400] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.866406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.877905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.877924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.877931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.889521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.889540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.889547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.901213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.901233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.901239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.909585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.909604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.909611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.916956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.916975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.916981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.926965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.926985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.926992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.937049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.937068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.937074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.946955] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.946973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.946980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.957077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.957096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.957106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.967630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.967649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.967655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.978764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.978783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.978789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.989529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.989548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.989554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:00.999964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:00.999983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:00.999989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:01.009880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:01.009898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:01.009905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:01.021659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:01.021678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:01.021685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:01.034061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:01.034079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:01.034086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:01.044935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:01.044954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:01.044960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:01.056042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:01.056063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:01.056070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:01.066596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:01.066615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:01.066621] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:01.078170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:01.078188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:01.078194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.563 [2024-11-20 09:14:01.088199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.563 [2024-11-20 09:14:01.088217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.563 [2024-11-20 09:14:01.088223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.100037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 09:14:01.100056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.100063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.111056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 09:14:01.111074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.111080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.122626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 09:14:01.122645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.122651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.132596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 09:14:01.132614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.132621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.142224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 09:14:01.142242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.142248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.154046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 09:14:01.154064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.154071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.164019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 09:14:01.164038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.164044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.173974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 09:14:01.173992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.173999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.181979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 09:14:01.181998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.182004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.192669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 
09:14:01.192688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.192695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.204164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 09:14:01.204183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.204190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.215967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.847 [2024-11-20 09:14:01.215986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.847 [2024-11-20 09:14:01.215992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.847 [2024-11-20 09:14:01.227705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.227724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.227731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.239039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.239057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.239067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.250854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.250872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.250879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.262573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.262591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.262598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.273796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.273815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.273821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.283491] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.283508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.283515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.293879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.293898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.293904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.303314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.303332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.303339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.313093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.313111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.313118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.320810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.320829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.320835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.331428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.331447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.331453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.342732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.342750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.342757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.355558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.355576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.355583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.848 [2024-11-20 09:14:01.368371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:35.848 [2024-11-20 09:14:01.368390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.848 [2024-11-20 09:14:01.368396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.108 [2024-11-20 09:14:01.380328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.108 [2024-11-20 09:14:01.380348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.108 [2024-11-20 09:14:01.380355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.108 [2024-11-20 09:14:01.391606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.108 [2024-11-20 09:14:01.391624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.108 [2024-11-20 09:14:01.391631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.108 [2024-11-20 09:14:01.402873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.108 [2024-11-20 09:14:01.402891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.108 [2024-11-20 
09:14:01.402898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.108 [2024-11-20 09:14:01.411881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.108 [2024-11-20 09:14:01.411900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.108 [2024-11-20 09:14:01.411906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.108 [2024-11-20 09:14:01.421600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.108 [2024-11-20 09:14:01.421619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.421629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.429902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.429920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.429927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.439068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.439086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.439093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.449676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.449693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.449700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.457734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.457753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.457760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.468770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.468789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.468795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.480118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.480136] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.480143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.489934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.489953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.489959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.500833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.500851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.500857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.509519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.509540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.509547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.521042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 
09:14:01.521060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.521066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.532450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.532468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.532475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.540871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.540890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.540897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.551115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.551133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.551140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.562151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.562174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.562180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.573333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.573351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.573358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.582299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.582318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.582326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.591406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.591425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.591431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.597851] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.597868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.597875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.608118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.608137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.608143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.619844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.619862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.619868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.109 [2024-11-20 09:14:01.630164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.109 [2024-11-20 09:14:01.630183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.109 [2024-11-20 09:14:01.630189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.642525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.642544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.642552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.370 2945.00 IOPS, 368.12 MiB/s [2024-11-20T08:14:01.899Z] [2024-11-20 09:14:01.652034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.652053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.652059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.662914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.662933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.662939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.674960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.674978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.674984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.687050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.687068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.687078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.693491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.693508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.693514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.702533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.702552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.702559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.713960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.713979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:36.370 [2024-11-20 09:14:01.713985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.724786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.724805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.724811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.735372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.735390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.735397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.745608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.745625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.745632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.755428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.755447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.755453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.766749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.766767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.766773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.776518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.776536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.776543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.786736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.786754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.786760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.798882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.798901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.798907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.808207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.808224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.808231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.815197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.815215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.815221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.826592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.826611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.826617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.836817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.836835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.836842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.848764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.848782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.848788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.860319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.860338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.860347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.872392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10) 00:28:36.370 [2024-11-20 09:14:01.872411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.370 [2024-11-20 09:14:01.872417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.370 [2024-11-20 09:14:01.884708] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10)
00:28:36.370 [2024-11-20 09:14:01.884727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.370 [2024-11-20 09:14:01.884733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... further "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR record triplets of identical form, varying only in timestamp, cid, and lba, repeated from 09:14:01.895 through 09:14:02.635 elided ...]
00:28:37.159 2991.50 IOPS, 373.94 MiB/s [2024-11-20T08:14:02.688Z] [2024-11-20 09:14:02.646985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24fba10)
00:28:37.159 [2024-11-20 09:14:02.647005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.159 [2024-11-20 09:14:02.647011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:37.159
00:28:37.159 Latency(us)
00:28:37.159 [2024-11-20T08:14:02.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:37.159 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:37.159 nvme0n1 : 2.00 2993.31 374.16 0.00 0.00 5341.44 600.75 12724.91
00:28:37.159 [2024-11-20T08:14:02.688Z] ===================================================================================================================
00:28:37.159 [2024-11-20T08:14:02.688Z] Total : 2993.31 374.16 0.00 0.00 5341.44 600.75 12724.91
00:28:37.159 {
00:28:37.159 "results": [
00:28:37.159 {
00:28:37.159 "job": "nvme0n1",
00:28:37.159 "core_mask": "0x2",
00:28:37.159 "workload": "randread",
00:28:37.159 "status": "finished",
"queue_depth": 16, 00:28:37.159 "io_size": 131072, 00:28:37.159 "runtime": 2.004134, 00:28:37.159 "iops": 2993.312822396107, 00:28:37.159 "mibps": 374.1641027995134, 00:28:37.159 "io_failed": 0, 00:28:37.159 "io_timeout": 0, 00:28:37.159 "avg_latency_us": 5341.444480746791, 00:28:37.159 "min_latency_us": 600.7466666666667, 00:28:37.159 "max_latency_us": 12724.906666666666 00:28:37.159 } 00:28:37.159 ], 00:28:37.159 "core_count": 1 00:28:37.159 } 00:28:37.159 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:37.159 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:37.159 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:37.159 | .driver_specific 00:28:37.159 | .nvme_error 00:28:37.159 | .status_code 00:28:37.159 | .command_transient_transport_error' 00:28:37.159 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 )) 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 868423 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 868423 ']' 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 868423 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 868423 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 868423' 00:28:37.420 killing process with pid 868423 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 868423 00:28:37.420 Received shutdown signal, test time was about 2.000000 seconds 00:28:37.420 00:28:37.420 Latency(us) 00:28:37.420 [2024-11-20T08:14:02.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.420 [2024-11-20T08:14:02.949Z] =================================================================================================================== 00:28:37.420 [2024-11-20T08:14:02.949Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:37.420 09:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 868423 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=869154 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 
-- # waitforlisten 869154 /var/tmp/bperf.sock 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 869154 ']' 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:37.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.684 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.684 [2024-11-20 09:14:03.064306] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:28:37.684 [2024-11-20 09:14:03.064361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869154 ] 00:28:37.684 [2024-11-20 09:14:03.150087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.684 [2024-11-20 09:14:03.178408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.624 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.624 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:38.624 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:38.625 09:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:38.625 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:38.625 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.625 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.625 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.625 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:38.625 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.197 nvme0n1 00:28:39.197 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:39.197 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.197 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.197 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.197 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:39.197 09:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:39.197 Running I/O for 2 seconds... 
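The trace above kicks off the workload via SPDK's `bdevperf.py -s /var/tmp/bperf.sock perform_tests` helper. As an aside, a minimal sketch of the JSON-RPC 2.0 framing such a call puts on the Unix socket (the `perform_tests` method name and socket path are taken from the log; the `build_rpc_request` helper is hypothetical, assuming SPDK's standard newline-delimited JSON-RPC framing):

```python
import json

def build_rpc_request(method, params=None, request_id=1):
    # Hypothetical helper: builds one JSON-RPC 2.0 request of the kind an
    # SPDK RPC listener (e.g. /var/tmp/bperf.sock in this log) accepts.
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# The raw equivalent of `bdevperf.py -s /var/tmp/bperf.sock perform_tests`:
request = build_rpc_request("perform_tests")
```

In the actual test this request is sent by the helper script; the sketch only shows the wire format, not the socket handling.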
00:28:39.197 [2024-11-20 09:14:04.575934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e8d30 00:28:39.197 [2024-11-20 09:14:04.577070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.197 [2024-11-20 09:14:04.577100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:39.197 [2024-11-20 09:14:04.584700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e8088 00:28:39.197 [2024-11-20 09:14:04.585818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.197 [2024-11-20 09:14:04.585837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:39.197 [2024-11-20 09:14:04.593390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0788 00:28:39.197 [2024-11-20 09:14:04.594508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.197 [2024-11-20 09:14:04.594526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.197 [2024-11-20 09:14:04.601943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166dfdc0 00:28:39.198 [2024-11-20 09:14:04.603067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.603083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.610451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f35f0 00:28:39.198 [2024-11-20 09:14:04.611573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.611590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.618954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e73e0 00:28:39.198 [2024-11-20 09:14:04.620081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.620098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.627634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ec408 00:28:39.198 [2024-11-20 09:14:04.628761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.628777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.636109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0788 00:28:39.198 [2024-11-20 09:14:04.637222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.637239] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.644604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166dfdc0 00:28:39.198 [2024-11-20 09:14:04.645727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.645743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.653102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f35f0 00:28:39.198 [2024-11-20 09:14:04.654227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.654244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.661573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e73e0 00:28:39.198 [2024-11-20 09:14:04.662686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.662702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.670057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ec408 00:28:39.198 [2024-11-20 09:14:04.671128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.671145] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.678521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0788 00:28:39.198 [2024-11-20 09:14:04.679636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.679653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.686988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166dfdc0 00:28:39.198 [2024-11-20 09:14:04.688119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.688135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.695476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f35f0 00:28:39.198 [2024-11-20 09:14:04.696593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.696609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.704019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e73e0 00:28:39.198 [2024-11-20 09:14:04.705130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:39.198 [2024-11-20 09:14:04.705147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.712482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ec408 00:28:39.198 [2024-11-20 09:14:04.713561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.713577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.198 [2024-11-20 09:14:04.720942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0788 00:28:39.198 [2024-11-20 09:14:04.722058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.198 [2024-11-20 09:14:04.722074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.460 [2024-11-20 09:14:04.729388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166dfdc0 00:28:39.460 [2024-11-20 09:14:04.730507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.460 [2024-11-20 09:14:04.730523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.460 [2024-11-20 09:14:04.737873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f35f0 00:28:39.460 [2024-11-20 09:14:04.739001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17465 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.460 [2024-11-20 09:14:04.739018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.460 [2024-11-20 09:14:04.746338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e73e0 00:28:39.460 [2024-11-20 09:14:04.747457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.460 [2024-11-20 09:14:04.747473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.460 [2024-11-20 09:14:04.754805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ec408 00:28:39.460 [2024-11-20 09:14:04.755910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.755925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.763271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0788 00:28:39.461 [2024-11-20 09:14:04.764364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.764380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.771710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166dfdc0 00:28:39.461 [2024-11-20 09:14:04.772842] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.772858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.780192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f35f0 00:28:39.461 [2024-11-20 09:14:04.781278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.781293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.788622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fe2e8 00:28:39.461 [2024-11-20 09:14:04.789734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.789753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.795731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e5658 00:28:39.461 [2024-11-20 09:14:04.796405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.796420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.804180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ed920 00:28:39.461 [2024-11-20 09:14:04.804850] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.804866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.812611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166eee38 00:28:39.461 [2024-11-20 09:14:04.813275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.813291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.821031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f31b8 00:28:39.461 [2024-11-20 09:14:04.821704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.821720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.829488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f7970 00:28:39.461 [2024-11-20 09:14:04.830138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.830153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.837957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fe720 
00:28:39.461 [2024-11-20 09:14:04.838592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.838608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.846418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e73e0 00:28:39.461 [2024-11-20 09:14:04.847087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.847103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.854858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0350 00:28:39.461 [2024-11-20 09:14:04.855535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.855550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.863291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f46d0 00:28:39.461 [2024-11-20 09:14:04.863955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.461 [2024-11-20 09:14:04.863971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.461 [2024-11-20 09:14:04.871728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x208f520) with pdu=0x2000166f6458
00:28:39.461 [2024-11-20 09:14:04.872396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.872412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.880188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f96f8
00:28:39.461 [2024-11-20 09:14:04.880870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.880885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.888632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e5658
00:28:39.461 [2024-11-20 09:14:04.889318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.889334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.897117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ed920
00:28:39.461 [2024-11-20 09:14:04.897786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.897801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.905555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166eee38
00:28:39.461 [2024-11-20 09:14:04.906217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.906233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.913980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f31b8
00:28:39.461 [2024-11-20 09:14:04.914670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.914685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.922457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f7970
00:28:39.461 [2024-11-20 09:14:04.923128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.923143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.930892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fe720
00:28:39.461 [2024-11-20 09:14:04.931540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.931556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.939364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e73e0
00:28:39.461 [2024-11-20 09:14:04.940024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.940039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.947799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0350
00:28:39.461 [2024-11-20 09:14:04.948431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.948447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.956212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f46d0
00:28:39.461 [2024-11-20 09:14:04.956884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.956899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.964653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f6458
00:28:39.461 [2024-11-20 09:14:04.965286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.965301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.973100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f96f8
00:28:39.461 [2024-11-20 09:14:04.973746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.461 [2024-11-20 09:14:04.973762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.461 [2024-11-20 09:14:04.981535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e5658
00:28:39.462 [2024-11-20 09:14:04.982206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.462 [2024-11-20 09:14:04.982222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.724 [2024-11-20 09:14:04.989984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ed920
00:28:39.724 [2024-11-20 09:14:04.990657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.724 [2024-11-20 09:14:04.990673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.724 [2024-11-20 09:14:04.998409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166eee38
00:28:39.724 [2024-11-20 09:14:04.999070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.724 [2024-11-20 09:14:04.999086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.724 [2024-11-20 09:14:05.006828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f31b8
00:28:39.724 [2024-11-20 09:14:05.007496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.724 [2024-11-20 09:14:05.007515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.724 [2024-11-20 09:14:05.015289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f7970
00:28:39.724 [2024-11-20 09:14:05.015955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.724 [2024-11-20 09:14:05.015970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.724 [2024-11-20 09:14:05.023701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fe720
00:28:39.724 [2024-11-20 09:14:05.024376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.724 [2024-11-20 09:14:05.024392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.724 [2024-11-20 09:14:05.032143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f6cc8
00:28:39.724 [2024-11-20 09:14:05.032799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.724 [2024-11-20 09:14:05.032814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.724 [2024-11-20 09:14:05.040567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1868
00:28:39.724 [2024-11-20 09:14:05.041226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.041242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.048994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0788
00:28:39.725 [2024-11-20 09:14:05.049661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.049677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.057454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f57b0
00:28:39.725 [2024-11-20 09:14:05.058089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.058105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.065921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f2948
00:28:39.725 [2024-11-20 09:14:05.066564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.066580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.074356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f6020
00:28:39.725 [2024-11-20 09:14:05.075022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.075038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.083092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f81e0
00:28:39.725 [2024-11-20 09:14:05.083880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.083898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.091802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fa7d8
00:28:39.725 [2024-11-20 09:14:05.092490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.092507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.100232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f81e0
00:28:39.725 [2024-11-20 09:14:05.100923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.100939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.108684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fa7d8
00:28:39.725 [2024-11-20 09:14:05.109380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.109397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.117141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f81e0
00:28:39.725 [2024-11-20 09:14:05.117846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.117863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.125609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fa7d8
00:28:39.725 [2024-11-20 09:14:05.126301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.126317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.134064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f81e0
00:28:39.725 [2024-11-20 09:14:05.134703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.134719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.142488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fa7d8
00:28:39.725 [2024-11-20 09:14:05.143025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.143041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.151308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:39.725 [2024-11-20 09:14:05.152225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.152241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.159768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:39.725 [2024-11-20 09:14:05.160686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.160702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.168256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98
[2024-11-20 09:14:05.169170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.169186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.176682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0
00:28:39.725 [2024-11-20 09:14:05.177560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.177575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.185138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:39.725 [2024-11-20 09:14:05.186065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.186081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.193636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:39.725 [2024-11-20 09:14:05.194551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.194567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.202095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98
00:28:39.725 [2024-11-20 09:14:05.202993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.203009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.210549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0
00:28:39.725 [2024-11-20 09:14:05.211428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.725 [2024-11-20 09:14:05.211443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.725 [2024-11-20 09:14:05.219008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:39.725 [2024-11-20 09:14:05.219925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.726 [2024-11-20 09:14:05.219941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.726 [2024-11-20 09:14:05.227455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:39.726 [2024-11-20 09:14:05.228358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.726 [2024-11-20 09:14:05.228374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.726 [2024-11-20 09:14:05.235880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98
00:28:39.726 [2024-11-20 09:14:05.236799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.726 [2024-11-20 09:14:05.236815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.726 [2024-11-20 09:14:05.244350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0
00:28:39.726 [2024-11-20 09:14:05.245264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.726 [2024-11-20 09:14:05.245280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.987 [2024-11-20 09:14:05.252840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:39.987 [2024-11-20 09:14:05.253724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.987 [2024-11-20 09:14:05.253740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.987 [2024-11-20 09:14:05.261279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:39.987 [2024-11-20 09:14:05.262195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.987 [2024-11-20 09:14:05.262210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.987 [2024-11-20 09:14:05.269734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98
00:28:39.987 [2024-11-20 09:14:05.270615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.987 [2024-11-20 09:14:05.270630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.987 [2024-11-20 09:14:05.278145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0
00:28:39.987 [2024-11-20 09:14:05.279066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.987 [2024-11-20 09:14:05.279082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.987 [2024-11-20 09:14:05.286633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:39.987 [2024-11-20 09:14:05.287558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.987 [2024-11-20 09:14:05.287574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.987 [2024-11-20 09:14:05.295125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:39.987 [2024-11-20 09:14:05.296049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.987 [2024-11-20 09:14:05.296064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.987 [2024-11-20 09:14:05.303562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98
00:28:39.987 [2024-11-20 09:14:05.304497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.987 [2024-11-20 09:14:05.304515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.312042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0
00:28:39.988 [2024-11-20 09:14:05.312959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.312975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.320515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:39.988 [2024-11-20 09:14:05.321444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.321459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.328932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:39.988 [2024-11-20 09:14:05.329820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.329835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.337458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98
00:28:39.988 [2024-11-20 09:14:05.338369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.338385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.345895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0
00:28:39.988 [2024-11-20 09:14:05.346784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.346799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.354336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:39.988 [2024-11-20 09:14:05.355239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.355254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.362783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:39.988 [2024-11-20 09:14:05.363661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.363676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.371207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98
00:28:39.988 [2024-11-20 09:14:05.372143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.372161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.379656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0
00:28:39.988 [2024-11-20 09:14:05.380565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.380581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.388218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:39.988 [2024-11-20 09:14:05.389129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.389145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.396684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:39.988 [2024-11-20 09:14:05.397607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.397623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.405204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98
00:28:39.988 [2024-11-20 09:14:05.406121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.406136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.413685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0
00:28:39.988 [2024-11-20 09:14:05.414609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.414624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.422172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:39.988 [2024-11-20 09:14:05.423085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.423100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.430649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:39.988 [2024-11-20 09:14:05.431570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.431586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.439090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98
00:28:39.988 [2024-11-20 09:14:05.440013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.440029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.447548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0
00:28:39.988 [2024-11-20 09:14:05.448485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.448500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.456012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:39.988 [2024-11-20 09:14:05.456931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.456946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.464423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:39.988 [2024-11-20 09:14:05.465357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.465372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.472924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98
00:28:39.988 [2024-11-20 09:14:05.473847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.473863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.481371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0
00:28:39.988 [2024-11-20 09:14:05.482281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.482297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.489844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:39.988 [2024-11-20 09:14:05.490773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.490789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.498322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:39.988 [2024-11-20 09:14:05.499219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.499235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.988 [2024-11-20 09:14:05.506766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98
00:28:39.988 [2024-11-20 09:14:05.507693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.988 [2024-11-20 09:14:05.507708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:40.250 [2024-11-20 09:14:05.515238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0
00:28:40.250 [2024-11-20 09:14:05.516148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.250 [2024-11-20 09:14:05.516166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:40.250 [2024-11-20 09:14:05.523689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0
00:28:40.250 [2024-11-20 09:14:05.524612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.250 [2024-11-20 09:14:05.524630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:40.250 [2024-11-20 09:14:05.532186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998
00:28:40.250 [2024-11-20 09:14:05.533101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.533117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.540648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebb98 00:28:40.250 [2024-11-20 09:14:05.541580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.541595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.549102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1ca0 00:28:40.250 [2024-11-20 09:14:05.550024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.550040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.557526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fdeb0 00:28:40.250 [2024-11-20 09:14:05.558443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.558459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:40.250 29856.00 IOPS, 116.62 MiB/s [2024-11-20T08:14:05.779Z] [2024-11-20 09:14:05.566140] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ee190 00:28:40.250 [2024-11-20 09:14:05.566919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.566935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.574636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ed4e8 00:28:40.250 [2024-11-20 09:14:05.575425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.575440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.583118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fe720 00:28:40.250 [2024-11-20 09:14:05.583911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.583927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.591595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ee190 00:28:40.250 [2024-11-20 09:14:05.592397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.592413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:28:40.250 [2024-11-20 09:14:05.600037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ed4e8 00:28:40.250 [2024-11-20 09:14:05.600852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.600868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.608498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fe720 00:28:40.250 [2024-11-20 09:14:05.609305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.609321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.616980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ee190 00:28:40.250 [2024-11-20 09:14:05.617797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.617813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.625602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ed4e8 00:28:40.250 [2024-11-20 09:14:05.626395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.626413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.634063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fe720 00:28:40.250 [2024-11-20 09:14:05.634859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.634876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.642518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ee190 00:28:40.250 [2024-11-20 09:14:05.643330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.643346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.650957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ed4e8 00:28:40.250 [2024-11-20 09:14:05.651766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.651782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.659492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fe720 00:28:40.250 [2024-11-20 09:14:05.660291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.660307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.667956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ee190 00:28:40.250 [2024-11-20 09:14:05.668790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.668807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.676810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ed4e8 00:28:40.250 [2024-11-20 09:14:05.677901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.677917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.685174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f8e88 00:28:40.250 [2024-11-20 09:14:05.686193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.686208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.693611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc560 00:28:40.250 [2024-11-20 09:14:05.694658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.250 [2024-11-20 09:14:05.694674] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.250 [2024-11-20 09:14:05.702077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166de470 00:28:40.250 [2024-11-20 09:14:05.703107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.251 [2024-11-20 09:14:05.703123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.251 [2024-11-20 09:14:05.710538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fbcf0 00:28:40.251 [2024-11-20 09:14:05.711571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.251 [2024-11-20 09:14:05.711586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.251 [2024-11-20 09:14:05.718976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e27f0 00:28:40.251 [2024-11-20 09:14:05.719961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.251 [2024-11-20 09:14:05.719977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.251 [2024-11-20 09:14:05.727422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e84c0 00:28:40.251 [2024-11-20 09:14:05.728414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16873 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:40.251 [2024-11-20 09:14:05.728430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.251 [2024-11-20 09:14:05.735846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f3a28 00:28:40.251 [2024-11-20 09:14:05.736868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.251 [2024-11-20 09:14:05.736884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.251 [2024-11-20 09:14:05.744278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ef6a8 00:28:40.251 [2024-11-20 09:14:05.745300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.251 [2024-11-20 09:14:05.745318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.251 [2024-11-20 09:14:05.752743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e38d0 00:28:40.251 [2024-11-20 09:14:05.753787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.251 [2024-11-20 09:14:05.753803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.251 [2024-11-20 09:14:05.761194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ea248 00:28:40.251 [2024-11-20 09:14:05.762238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:13758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.251 [2024-11-20 09:14:05.762253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.251 [2024-11-20 09:14:05.769644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ed4e8 00:28:40.251 [2024-11-20 09:14:05.770674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.251 [2024-11-20 09:14:05.770690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.778081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f8e88 00:28:40.512 [2024-11-20 09:14:05.779111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.779126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.786541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc560 00:28:40.512 [2024-11-20 09:14:05.787570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.787585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.795016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166de470 00:28:40.512 [2024-11-20 09:14:05.796026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.796041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.803505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fbcf0 00:28:40.512 [2024-11-20 09:14:05.804549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.804564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.811951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e27f0 00:28:40.512 [2024-11-20 09:14:05.812988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.813003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.820399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e84c0 00:28:40.512 [2024-11-20 09:14:05.821418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.821434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.828832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f3a28 00:28:40.512 
[2024-11-20 09:14:05.829859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.829875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.837280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ef6a8 00:28:40.512 [2024-11-20 09:14:05.838300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.838316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.845745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e38d0 00:28:40.512 [2024-11-20 09:14:05.846774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.846789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.853395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f4f40 00:28:40.512 [2024-11-20 09:14:05.854692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.854707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.861247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x208f520) with pdu=0x2000166e0630 00:28:40.512 [2024-11-20 09:14:05.861901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.861916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.869710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fc998 00:28:40.512 [2024-11-20 09:14:05.870360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.870376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.878137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f4f40 00:28:40.512 [2024-11-20 09:14:05.878791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.878807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.886737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f5378 00:28:40.512 [2024-11-20 09:14:05.887374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.512 [2024-11-20 09:14:05.887390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.512 [2024-11-20 09:14:05.895198] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0ff8 00:28:40.512 [2024-11-20 09:14:05.895855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.895871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:05.903644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f7970 00:28:40.513 [2024-11-20 09:14:05.904303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.904319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:05.912077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ef6a8 00:28:40.513 [2024-11-20 09:14:05.912755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.912772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:05.920582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ecc78 00:28:40.513 [2024-11-20 09:14:05.921223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.921239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:28:40.513 [2024-11-20 09:14:05.929018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e0630 00:28:40.513 [2024-11-20 09:14:05.929684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.929700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:05.937493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f5be8 00:28:40.513 [2024-11-20 09:14:05.938171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.938187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:05.945937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e73e0 00:28:40.513 [2024-11-20 09:14:05.946606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.946622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:05.954395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e01f8 00:28:40.513 [2024-11-20 09:14:05.955068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.955084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:05.962829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f57b0 00:28:40.513 [2024-11-20 09:14:05.963499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.963521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:05.971266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e9e10 00:28:40.513 [2024-11-20 09:14:05.971941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.971957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:05.979721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0788 00:28:40.513 [2024-11-20 09:14:05.980369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.980385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:05.988195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fd208 00:28:40.513 [2024-11-20 09:14:05.988859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.988875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:05.996644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f31b8 00:28:40.513 [2024-11-20 09:14:05.997314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:05.997330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:06.005095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e6738 00:28:40.513 [2024-11-20 09:14:06.005752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:06.005768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:06.013531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f4298 00:28:40.513 [2024-11-20 09:14:06.014194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:06.014210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:06.021979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166eee38 00:28:40.513 [2024-11-20 09:14:06.022642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:40.513 [2024-11-20 09:14:06.022658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.513 [2024-11-20 09:14:06.030435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebfd0 00:28:40.513 [2024-11-20 09:14:06.031106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.513 [2024-11-20 09:14:06.031122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.038897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f7da8 00:28:40.774 [2024-11-20 09:14:06.039585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.039600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.047351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e88f8 00:28:40.774 [2024-11-20 09:14:06.048028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.048044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.055806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166df988 00:28:40.774 [2024-11-20 09:14:06.056475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:14505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.056491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.064272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e1b48 00:28:40.774 [2024-11-20 09:14:06.064939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.064954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.072733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f6cc8 00:28:40.774 [2024-11-20 09:14:06.073394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.073410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.081190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e6fa8 00:28:40.774 [2024-11-20 09:14:06.081854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.081870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.089669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1868 00:28:40.774 [2024-11-20 09:14:06.090323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.090340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.098108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f6890 00:28:40.774 [2024-11-20 09:14:06.098776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.098792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.106546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fcdd0 00:28:40.774 [2024-11-20 09:14:06.107170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.107185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.115000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0350 00:28:40.774 [2024-11-20 09:14:06.115672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.115688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.123460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e0a68 
00:28:40.774 [2024-11-20 09:14:06.124133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.124149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.131907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f8a50 00:28:40.774 [2024-11-20 09:14:06.132573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.774 [2024-11-20 09:14:06.132589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.774 [2024-11-20 09:14:06.140356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e7c50 00:28:40.774 [2024-11-20 09:14:06.140992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.141009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.148788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f2d80 00:28:40.775 [2024-11-20 09:14:06.149415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.149432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.157249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x208f520) with pdu=0x2000166f4f40 00:28:40.775 [2024-11-20 09:14:06.157917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.157933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.165713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f5378 00:28:40.775 [2024-11-20 09:14:06.166355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.166371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.174175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0ff8 00:28:40.775 [2024-11-20 09:14:06.174851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.174867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.182618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f7970 00:28:40.775 [2024-11-20 09:14:06.183281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.183300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.191067] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ef6a8 00:28:40.775 [2024-11-20 09:14:06.191724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.191740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.199499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ecc78 00:28:40.775 [2024-11-20 09:14:06.200155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.200174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.207947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e0630 00:28:40.775 [2024-11-20 09:14:06.208614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.208630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.216416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f5be8 00:28:40.775 [2024-11-20 09:14:06.217079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.217095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:28:40.775 [2024-11-20 09:14:06.224881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e73e0 00:28:40.775 [2024-11-20 09:14:06.225553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.225568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.233331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e01f8 00:28:40.775 [2024-11-20 09:14:06.234012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.234028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.241785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f57b0 00:28:40.775 [2024-11-20 09:14:06.242457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.242473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.250219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e9e10 00:28:40.775 [2024-11-20 09:14:06.250881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.250897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.258682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0788 00:28:40.775 [2024-11-20 09:14:06.259376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.259392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.267126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fd208 00:28:40.775 [2024-11-20 09:14:06.267789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.267805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.275590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f31b8 00:28:40.775 [2024-11-20 09:14:06.276240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.276256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.284036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e6738 00:28:40.775 [2024-11-20 09:14:06.284715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.284732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.775 [2024-11-20 09:14:06.292478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f4298 00:28:40.775 [2024-11-20 09:14:06.293145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.775 [2024-11-20 09:14:06.293164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.300925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166eee38 00:28:41.037 [2024-11-20 09:14:06.301558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.301575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.309384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ebfd0 00:28:41.037 [2024-11-20 09:14:06.310043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.310059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.317825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f7da8 00:28:41.037 [2024-11-20 09:14:06.318484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.318500] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.326272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e88f8 00:28:41.037 [2024-11-20 09:14:06.326942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.326958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.334700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166df988 00:28:41.037 [2024-11-20 09:14:06.335363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.335379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.343132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e1b48 00:28:41.037 [2024-11-20 09:14:06.343796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.343812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.351585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f6cc8 00:28:41.037 [2024-11-20 09:14:06.352226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1524 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:41.037 [2024-11-20 09:14:06.352242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.360043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e6fa8 00:28:41.037 [2024-11-20 09:14:06.360724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.360740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.368492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f1868 00:28:41.037 [2024-11-20 09:14:06.369165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.369181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.376920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f6890 00:28:41.037 [2024-11-20 09:14:06.377557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.377573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.385341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fcdd0 00:28:41.037 [2024-11-20 09:14:06.385998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 
nsid:1 lba:10263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.386014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.393869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0350 00:28:41.037 [2024-11-20 09:14:06.394545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.394561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.402311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e0a68 00:28:41.037 [2024-11-20 09:14:06.402968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.402986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.410759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f8a50 00:28:41.037 [2024-11-20 09:14:06.411425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.411441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.419197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e7c50 00:28:41.037 [2024-11-20 09:14:06.419860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.419875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.427615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f2d80 00:28:41.037 [2024-11-20 09:14:06.428279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.428295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.436038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f4f40 00:28:41.037 [2024-11-20 09:14:06.436703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.436719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.444512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f5378 00:28:41.037 [2024-11-20 09:14:06.445193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.445209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.452959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0ff8 
00:28:41.037 [2024-11-20 09:14:06.453604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.453620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.461418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f7970 00:28:41.037 [2024-11-20 09:14:06.462082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.462098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.469837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ef6a8 00:28:41.037 [2024-11-20 09:14:06.470512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.470528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.478276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166ecc78 00:28:41.037 [2024-11-20 09:14:06.478898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.478914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.486742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x208f520) with pdu=0x2000166e0630 00:28:41.037 [2024-11-20 09:14:06.487400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.487416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.495206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f5be8 00:28:41.037 [2024-11-20 09:14:06.495866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.495882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.503637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e73e0 00:28:41.037 [2024-11-20 09:14:06.504291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.504307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.512078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e01f8 00:28:41.037 [2024-11-20 09:14:06.512751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.037 [2024-11-20 09:14:06.512767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.037 [2024-11-20 09:14:06.520502] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f57b0 00:28:41.038 [2024-11-20 09:14:06.521164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.038 [2024-11-20 09:14:06.521180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.038 [2024-11-20 09:14:06.528936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e9e10 00:28:41.038 [2024-11-20 09:14:06.529579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.038 [2024-11-20 09:14:06.529595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.038 [2024-11-20 09:14:06.537386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f0788 00:28:41.038 [2024-11-20 09:14:06.538048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.038 [2024-11-20 09:14:06.538063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.038 [2024-11-20 09:14:06.545820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166fd208 00:28:41.038 [2024-11-20 09:14:06.546503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.038 [2024-11-20 09:14:06.546519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:28:41.038 [2024-11-20 09:14:06.554317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166f31b8 00:28:41.038 [2024-11-20 09:14:06.554995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.038 [2024-11-20 09:14:06.555011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.299 [2024-11-20 09:14:06.562760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f520) with pdu=0x2000166e6738 00:28:41.299 [2024-11-20 09:14:06.563594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.299 [2024-11-20 09:14:06.563611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:41.299 30050.50 IOPS, 117.38 MiB/s 00:28:41.299 Latency(us) 00:28:41.299 [2024-11-20T08:14:06.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.299 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:41.299 nvme0n1 : 2.01 30050.16 117.38 0.00 0.00 4254.35 2075.31 15510.19 00:28:41.299 [2024-11-20T08:14:06.828Z] =================================================================================================================== 00:28:41.299 [2024-11-20T08:14:06.828Z] Total : 30050.16 117.38 0.00 0.00 4254.35 2075.31 15510.19 00:28:41.299 { 00:28:41.299 "results": [ 00:28:41.299 { 00:28:41.299 "job": "nvme0n1", 00:28:41.299 "core_mask": "0x2", 00:28:41.299 "workload": "randwrite", 00:28:41.299 "status": "finished", 00:28:41.299 "queue_depth": 128, 00:28:41.299 "io_size": 4096, 00:28:41.299 "runtime": 2.006412, 00:28:41.299 "iops": 30050.15918963802, 00:28:41.299 "mibps": 
117.38343433452351, 00:28:41.299 "io_failed": 0, 00:28:41.299 "io_timeout": 0, 00:28:41.299 "avg_latency_us": 4254.350292515991, 00:28:41.299 "min_latency_us": 2075.306666666667, 00:28:41.299 "max_latency_us": 15510.186666666666 00:28:41.299 } 00:28:41.299 ], 00:28:41.299 "core_count": 1 00:28:41.299 } 00:28:41.299 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:41.299 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:41.299 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:41.299 | .driver_specific 00:28:41.299 | .nvme_error 00:28:41.299 | .status_code 00:28:41.299 | .command_transient_transport_error' 00:28:41.299 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:41.299 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 )) 00:28:41.299 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 869154 00:28:41.299 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 869154 ']' 00:28:41.299 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 869154 00:28:41.299 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:41.299 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.299 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 869154 00:28:41.560 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 
-- # process_name=reactor_1 00:28:41.560 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:41.560 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 869154' 00:28:41.560 killing process with pid 869154 00:28:41.560 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 869154 00:28:41.560 Received shutdown signal, test time was about 2.000000 seconds 00:28:41.560 00:28:41.560 Latency(us) 00:28:41.560 [2024-11-20T08:14:07.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.560 [2024-11-20T08:14:07.089Z] =================================================================================================================== 00:28:41.560 [2024-11-20T08:14:07.089Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:41.560 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 869154 00:28:41.560 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:41.560 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:41.560 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:41.560 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:41.560 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:41.560 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=869842 00:28:41.561 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 869842 /var/tmp/bperf.sock 00:28:41.561 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 869842 
']' 00:28:41.561 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:41.561 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:41.561 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.561 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:41.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:41.561 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.561 09:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.561 [2024-11-20 09:14:06.986709] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:28:41.561 [2024-11-20 09:14:06.986766] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869842 ] 00:28:41.561 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:41.561 Zero copy mechanism will not be used. 
00:28:41.561 [2024-11-20 09:14:07.069922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.821 [2024-11-20 09:14:07.099223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.392 09:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.392 09:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:42.392 09:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:42.392 09:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:42.652 09:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:42.652 09:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.652 09:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.652 09:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.652 09:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.652 09:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.652 nvme0n1 00:28:42.913 09:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:42.913 09:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.913 09:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.913 09:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.913 09:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:42.913 09:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:42.913 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:42.913 Zero copy mechanism will not be used. 00:28:42.913 Running I/O for 2 seconds... 00:28:42.913 [2024-11-20 09:14:08.293287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.913 [2024-11-20 09:14:08.293525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.913 [2024-11-20 09:14:08.293550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.913 [2024-11-20 09:14:08.303157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.913 [2024-11-20 09:14:08.303437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.913 [2024-11-20 09:14:08.303457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.913 
[2024-11-20 09:14:08.310668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.913 [2024-11-20 09:14:08.310887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.913 [2024-11-20 09:14:08.310903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.913 [2024-11-20 09:14:08.317388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.913 [2024-11-20 09:14:08.317617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.913 [2024-11-20 09:14:08.317633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.327213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.327263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.327279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.333820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.333882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.333902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.341986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.342276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.342291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.350235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.350297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.350313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.358407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.358651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.358667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.366411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.366477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.366492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.375575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.375640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.375656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.381029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.381335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.381352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.390629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.390687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.390703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.397089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.397170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.397186] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.401082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.401164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.401180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.406774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.406840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.406855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.416056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.416335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.416351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.422995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.423174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:42.914 [2024-11-20 09:14:08.423190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.427020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.427068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.427084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.434253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.434320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.434335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.914 [2024-11-20 09:14:08.438846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:42.914 [2024-11-20 09:14:08.439114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.914 [2024-11-20 09:14:08.439130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.445717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.445778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.445794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.453850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.453900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.453916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.463779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.464056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.464072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.471936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.472016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.472031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.477042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.477325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.477341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.485525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.485586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.485601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.493623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.493852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.493867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.501722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.502000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.502016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.509482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 
00:28:43.175 [2024-11-20 09:14:08.509723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.509738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.517626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.517683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.517697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.527604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.527881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.527950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.535852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.536055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.536071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.545259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.545328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.545344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.554690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.175 [2024-11-20 09:14:08.555045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.175 [2024-11-20 09:14:08.555061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.175 [2024-11-20 09:14:08.566107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.566375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.566390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.176 [2024-11-20 09:14:08.578005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.578305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.578321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.176 [2024-11-20 09:14:08.589936] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.590175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.590191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.176 [2024-11-20 09:14:08.601717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.601975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.601990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.176 [2024-11-20 09:14:08.613823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.614087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.614101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.176 [2024-11-20 09:14:08.623560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.623752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.623768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:43.176 [2024-11-20 09:14:08.635038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.635095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.635111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.176 [2024-11-20 09:14:08.645168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.645451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.645468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.176 [2024-11-20 09:14:08.656771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.657008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.657023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.176 [2024-11-20 09:14:08.668343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.668614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.668631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.176 [2024-11-20 09:14:08.679439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.679728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.679744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.176 [2024-11-20 09:14:08.691016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.176 [2024-11-20 09:14:08.691295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.176 [2024-11-20 09:14:08.691311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.702814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.703081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.438 [2024-11-20 09:14:08.703096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.714398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.714644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.438 [2024-11-20 09:14:08.714659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.726043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.726321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.438 [2024-11-20 09:14:08.726343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.737440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.737678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.438 [2024-11-20 09:14:08.737694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.749222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.749565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.438 [2024-11-20 09:14:08.749581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.760345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.760582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:43.438 [2024-11-20 09:14:08.760598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.771780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.772025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.438 [2024-11-20 09:14:08.772040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.783142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.783447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.438 [2024-11-20 09:14:08.783463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.794658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.794881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.438 [2024-11-20 09:14:08.794897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.806677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.806936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.438 [2024-11-20 09:14:08.806951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.818711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.818971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.438 [2024-11-20 09:14:08.818988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.438 [2024-11-20 09:14:08.830276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.438 [2024-11-20 09:14:08.830519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.830534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.841817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.842044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.842060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.853642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.853845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.853860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.865339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.865601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.865616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.876404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.876668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.876683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.887197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.887494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.887510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.898340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 
00:28:43.439 [2024-11-20 09:14:08.898657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.898673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.909375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.909613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.909628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.920641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.920888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.920904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.931598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.931908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.931924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.939553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.939629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.939644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.946833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.947064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.947080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.953066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.953138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.953154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.439 [2024-11-20 09:14:08.958908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.439 [2024-11-20 09:14:08.958976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.439 [2024-11-20 09:14:08.958991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:08.966985] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:08.967294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:08.967310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:08.972155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:08.972430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:08.972445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:08.983440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:08.983510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:08.983524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:08.989511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:08.989814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:08.989831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:28:43.702 [2024-11-20 09:14:08.997296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:08.997540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:08.997556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.002864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.002939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:09.002954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.006582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.006832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:09.006847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.013167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.013242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:09.013257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.017638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.017900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:09.017915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.025971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.026050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:09.026065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.034615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.034671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:09.034687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.041686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.041918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:09.041936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.053011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.053282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:09.053297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.063469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.063674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:09.063689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.073627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.073892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:09.073907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.085093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.085331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:43.702 [2024-11-20 09:14:09.085346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.095793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.096025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 09:14:09.096040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 09:14:09.106655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.702 [2024-11-20 09:14:09.106914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.106931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.117745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.117809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.117824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.125972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.126288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.126310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.131708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.131783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.131799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.137276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.137327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.137343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.143524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.143825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.143842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.149988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.150285] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.150301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.157553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.157636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.157651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.161852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.161911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.161927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.169435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.169704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.169720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.176055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.176136] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.176152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.183545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.183603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.183619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.192157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.192421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.192436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.197545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.197611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.197627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.202658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with 
pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.202748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.202763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.209197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.209512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.209529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 09:14:09.219008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.703 [2024-11-20 09:14:09.219065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 09:14:09.219080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.964 [2024-11-20 09:14:09.228330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.964 [2024-11-20 09:14:09.228616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.964 [2024-11-20 09:14:09.228639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.964 [2024-11-20 09:14:09.233590] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.964 [2024-11-20 09:14:09.233673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.964 [2024-11-20 09:14:09.233688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.964 [2024-11-20 09:14:09.239426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.964 [2024-11-20 09:14:09.239485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.964 [2024-11-20 09:14:09.239501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.964 [2024-11-20 09:14:09.244950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.964 [2024-11-20 09:14:09.245013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.964 [2024-11-20 09:14:09.245030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.964 [2024-11-20 09:14:09.252999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.253298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.253321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 
09:14:09.258981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.259065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.259081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.267399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.267460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.267475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.275993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.276052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.276067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.965 3532.00 IOPS, 441.50 MiB/s [2024-11-20T08:14:09.494Z] [2024-11-20 09:14:09.285842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.285904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.285918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.293433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.293512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.293527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.302462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.302518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.302533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.312574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.312632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.312648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.319997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.320290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.320305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.328953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.329016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.329032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.339569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.339839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.339855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.351216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.351478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.351494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.362589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.362832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:43.965 [2024-11-20 09:14:09.362848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.374688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.374937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.374955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.381009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.381273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.381289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.387836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.387898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.387913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.396536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.396790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.396806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.404882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.405118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.405134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.414043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.414105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.414120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.421079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.421314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.421330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.430517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.430718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.430733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.440980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.441265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.441284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.451817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.452087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.965 [2024-11-20 09:14:09.452104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.965 [2024-11-20 09:14:09.462937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.965 [2024-11-20 09:14:09.463208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.966 [2024-11-20 09:14:09.463224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.966 [2024-11-20 09:14:09.473224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 
00:28:43.966 [2024-11-20 09:14:09.473539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.966 [2024-11-20 09:14:09.473555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.966 [2024-11-20 09:14:09.484315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:43.966 [2024-11-20 09:14:09.484604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.966 [2024-11-20 09:14:09.484623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.227 [2024-11-20 09:14:09.495932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.227 [2024-11-20 09:14:09.496236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.227 [2024-11-20 09:14:09.496253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.227 [2024-11-20 09:14:09.506890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.227 [2024-11-20 09:14:09.507221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.227 [2024-11-20 09:14:09.507238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.227 [2024-11-20 09:14:09.517259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.227 [2024-11-20 09:14:09.517560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.227 [2024-11-20 09:14:09.517576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.227 [2024-11-20 09:14:09.527893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.227 [2024-11-20 09:14:09.528096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.227 [2024-11-20 09:14:09.528111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.227 [2024-11-20 09:14:09.537986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.227 [2024-11-20 09:14:09.538313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.227 [2024-11-20 09:14:09.538328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.227 [2024-11-20 09:14:09.548441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.227 [2024-11-20 09:14:09.548705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.227 [2024-11-20 09:14:09.548721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.227 [2024-11-20 09:14:09.556659] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.227 [2024-11-20 09:14:09.556790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.227 [2024-11-20 09:14:09.556805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.227 [2024-11-20 09:14:09.566592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.227 [2024-11-20 09:14:09.566785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.227 [2024-11-20 09:14:09.566801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.227 [2024-11-20 09:14:09.576519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.227 [2024-11-20 09:14:09.576812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.227 [2024-11-20 09:14:09.576827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.227 [2024-11-20 09:14:09.586315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.227 [2024-11-20 09:14:09.586589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.227 [2024-11-20 09:14:09.586606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:44.227 [2024-11-20 09:14:09.593793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.227 [2024-11-20 09:14:09.593971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.593987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.600292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.600435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.600450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.606769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.607168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.607184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.615349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.615587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.615603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.621497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.621688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.621757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.629041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.629333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.629349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.635797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.636011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.636027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.644242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.644604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.644620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.651287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.651656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.651672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.657672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.657898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.657914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.662942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.663100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.663116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.666076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.666231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:44.228 [2024-11-20 09:14:09.666247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.674318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.674595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.674611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.679428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.679773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.679790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.683916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.684092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.684108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.687113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.687274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.687293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.695736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.696107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.696125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.703359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.703414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.703428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.711227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.711297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.711311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.718976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.719029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.719044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.726852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.726902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.726917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.734481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.734547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.734562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.739951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.228 [2024-11-20 09:14:09.740008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.740023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.228 [2024-11-20 09:14:09.746594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 
00:28:44.228 [2024-11-20 09:14:09.746635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.228 [2024-11-20 09:14:09.746650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.489 [2024-11-20 09:14:09.753518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.489 [2024-11-20 09:14:09.753568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.489 [2024-11-20 09:14:09.753584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.489 [2024-11-20 09:14:09.758262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.489 [2024-11-20 09:14:09.758306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.489 [2024-11-20 09:14:09.758321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.489 [2024-11-20 09:14:09.764414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.489 [2024-11-20 09:14:09.764516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.489 [2024-11-20 09:14:09.764532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.489 [2024-11-20 09:14:09.770238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.489 [2024-11-20 09:14:09.770296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.489 [2024-11-20 09:14:09.770311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.489 [2024-11-20 09:14:09.778334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.489 [2024-11-20 09:14:09.778399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.489 [2024-11-20 09:14:09.778414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.489 [2024-11-20 09:14:09.785829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.489 [2024-11-20 09:14:09.785880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.489 [2024-11-20 09:14:09.785895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.489 [2024-11-20 09:14:09.795463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.489 [2024-11-20 09:14:09.795535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.489 [2024-11-20 09:14:09.795551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.489 [2024-11-20 09:14:09.804740] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.489 [2024-11-20 09:14:09.804789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.804805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.813292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.813355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.813370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.821461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.821625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.821641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.828254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.828296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.828311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:44.490 [2024-11-20 09:14:09.836505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.836565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.836581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.847117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.847427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.847445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.858649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.858917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.858932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.869025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.869305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.869320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.879420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.879679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.879695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.889733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.890038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.890054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.899969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.900269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.900288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.910507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.910761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.910777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.921362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.921427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.921441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.932235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.932532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.932548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.941761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.941969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.941985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.951824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.952093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:44.490 [2024-11-20 09:14:09.952109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.962502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.962756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.962773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.973437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.973703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.973718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.984099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.984401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.984417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:09.994325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:09.994582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:09.994597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:10.004804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:10.005114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:10.005132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.490 [2024-11-20 09:14:10.014639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.490 [2024-11-20 09:14:10.014778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.490 [2024-11-20 09:14:10.014793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.025086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.025328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.025343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.036477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.036725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.036740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.044486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.044598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.044613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.053665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.053959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.053976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.064772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.064829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.064844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.070781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 
00:28:44.752 [2024-11-20 09:14:10.071072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.071088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.077639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.077700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.077715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.083156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.083371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.083386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.091943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.092202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.092217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.096721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.096783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.096798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.103875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.104166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.104181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.113450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.113510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.113525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.121568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.121819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.121834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.130623] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.130705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.130720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.138061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.138351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.138370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.146175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.146429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.146445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.752 [2024-11-20 09:14:10.151942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.152004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.152020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:28:44.752 [2024-11-20 09:14:10.156971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.752 [2024-11-20 09:14:10.157031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.752 [2024-11-20 09:14:10.157046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.163880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.164194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.164210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.172067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.172360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.172376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.178528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.178632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.178647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.186035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.186081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.186096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.195407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.195633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.195648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.203664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.203727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.203743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.209170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.209466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.209481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.216513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.216576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.216591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.224565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.224617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.224632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.232196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.232254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.232270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.237679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.237920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:44.753 [2024-11-20 09:14:10.237935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.246435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.246487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.246503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.254613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.254658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.254673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.260426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.260670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.260685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.268020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.268329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.268346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.753 [2024-11-20 09:14:10.276941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:44.753 [2024-11-20 09:14:10.277027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.753 [2024-11-20 09:14:10.277042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.014 [2024-11-20 09:14:10.284966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208f860) with pdu=0x2000166ff3c8 00:28:45.014 [2024-11-20 09:14:10.285030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.014 [2024-11-20 09:14:10.285045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:45.014 3613.00 IOPS, 451.62 MiB/s 00:28:45.014 Latency(us) 00:28:45.014 [2024-11-20T08:14:10.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.014 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:45.014 nvme0n1 : 2.00 3612.92 451.62 0.00 0.00 4422.67 1447.25 17367.04 00:28:45.014 [2024-11-20T08:14:10.543Z] =================================================================================================================== 00:28:45.014 [2024-11-20T08:14:10.543Z] Total : 3612.92 451.62 0.00 0.00 4422.67 1447.25 17367.04 00:28:45.014 { 00:28:45.014 "results": [ 00:28:45.014 { 00:28:45.014 "job": "nvme0n1", 00:28:45.014 "core_mask": "0x2", 00:28:45.014 "workload": 
"randwrite", 00:28:45.014 "status": "finished", 00:28:45.014 "queue_depth": 16, 00:28:45.014 "io_size": 131072, 00:28:45.014 "runtime": 2.004747, 00:28:45.014 "iops": 3612.924723169557, 00:28:45.014 "mibps": 451.61559039619465, 00:28:45.014 "io_failed": 0, 00:28:45.014 "io_timeout": 0, 00:28:45.014 "avg_latency_us": 4422.668676883428, 00:28:45.014 "min_latency_us": 1447.2533333333333, 00:28:45.014 "max_latency_us": 17367.04 00:28:45.014 } 00:28:45.014 ], 00:28:45.014 "core_count": 1 00:28:45.014 } 00:28:45.014 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:45.014 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:45.014 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:45.014 | .driver_specific 00:28:45.014 | .nvme_error 00:28:45.014 | .status_code 00:28:45.014 | .command_transient_transport_error' 00:28:45.014 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:45.014 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 234 > 0 )) 00:28:45.014 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 869842 00:28:45.014 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 869842 ']' 00:28:45.014 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 869842 00:28:45.014 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:45.014 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.014 09:14:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 869842 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 869842' 00:28:45.276 killing process with pid 869842 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 869842 00:28:45.276 Received shutdown signal, test time was about 2.000000 seconds 00:28:45.276 00:28:45.276 Latency(us) 00:28:45.276 [2024-11-20T08:14:10.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.276 [2024-11-20T08:14:10.805Z] =================================================================================================================== 00:28:45.276 [2024-11-20T08:14:10.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 869842 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 867428 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 867428 ']' 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 867428 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 867428 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 867428' 00:28:45.276 killing process with pid 867428 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 867428 00:28:45.276 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 867428 00:28:45.537 00:28:45.537 real 0m16.537s 00:28:45.537 user 0m32.797s 00:28:45.537 sys 0m3.546s 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:45.537 ************************************ 00:28:45.537 END TEST nvmf_digest_error 00:28:45.537 ************************************ 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:28:45.537 rmmod nvme_tcp 00:28:45.537 rmmod nvme_fabrics 00:28:45.537 rmmod nvme_keyring 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 867428 ']' 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 867428 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 867428 ']' 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 867428 00:28:45.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (867428) - No such process 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 867428 is not found' 00:28:45.537 Process with pid 867428 is not found 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 
-- # remove_spdk_ns 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.537 09:14:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:48.083 00:28:48.083 real 0m43.556s 00:28:48.083 user 1m8.620s 00:28:48.083 sys 0m13.028s 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:48.083 ************************************ 00:28:48.083 END TEST nvmf_digest 00:28:48.083 ************************************ 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.083 ************************************ 00:28:48.083 START TEST nvmf_bdevperf 00:28:48.083 ************************************ 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:48.083 * Looking for test storage... 
00:28:48.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:48.083 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:48.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.084 --rc genhtml_branch_coverage=1 00:28:48.084 --rc genhtml_function_coverage=1 00:28:48.084 --rc genhtml_legend=1 00:28:48.084 --rc geninfo_all_blocks=1 00:28:48.084 --rc geninfo_unexecuted_blocks=1 00:28:48.084 00:28:48.084 ' 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:28:48.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.084 --rc genhtml_branch_coverage=1 00:28:48.084 --rc genhtml_function_coverage=1 00:28:48.084 --rc genhtml_legend=1 00:28:48.084 --rc geninfo_all_blocks=1 00:28:48.084 --rc geninfo_unexecuted_blocks=1 00:28:48.084 00:28:48.084 ' 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:48.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.084 --rc genhtml_branch_coverage=1 00:28:48.084 --rc genhtml_function_coverage=1 00:28:48.084 --rc genhtml_legend=1 00:28:48.084 --rc geninfo_all_blocks=1 00:28:48.084 --rc geninfo_unexecuted_blocks=1 00:28:48.084 00:28:48.084 ' 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:48.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.084 --rc genhtml_branch_coverage=1 00:28:48.084 --rc genhtml_function_coverage=1 00:28:48.084 --rc genhtml_legend=1 00:28:48.084 --rc geninfo_all_blocks=1 00:28:48.084 --rc geninfo_unexecuted_blocks=1 00:28:48.084 00:28:48.084 ' 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:48.084 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.085 09:14:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.223 09:14:20 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.223 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:56.224 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.224 
09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:56.224 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:56.224 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:56.224 09:14:20 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:56.224 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:56.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:28:56.224 00:28:56.224 --- 10.0.0.2 ping statistics --- 00:28:56.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.224 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:56.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:28:56.224 00:28:56.224 --- 10.0.0.1 ping statistics --- 00:28:56.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.224 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=874861 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 874861 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 874861 ']' 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.224 09:14:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.224 [2024-11-20 09:14:20.923374] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:28:56.224 [2024-11-20 09:14:20.923442] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.224 [2024-11-20 09:14:21.026070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:56.224 [2024-11-20 09:14:21.078264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.224 [2024-11-20 09:14:21.078317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.225 [2024-11-20 09:14:21.078325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.225 [2024-11-20 09:14:21.078333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.225 [2024-11-20 09:14:21.078339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:56.225 [2024-11-20 09:14:21.080230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.225 [2024-11-20 09:14:21.080421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:56.225 [2024-11-20 09:14:21.080422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.485 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.485 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:56.485 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:56.485 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.485 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.485 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.486 [2024-11-20 09:14:21.805630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.486 Malloc0 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.486 [2024-11-20 09:14:21.882744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:56.486 
09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:56.486 { 00:28:56.486 "params": { 00:28:56.486 "name": "Nvme$subsystem", 00:28:56.486 "trtype": "$TEST_TRANSPORT", 00:28:56.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.486 "adrfam": "ipv4", 00:28:56.486 "trsvcid": "$NVMF_PORT", 00:28:56.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.486 "hdgst": ${hdgst:-false}, 00:28:56.486 "ddgst": ${ddgst:-false} 00:28:56.486 }, 00:28:56.486 "method": "bdev_nvme_attach_controller" 00:28:56.486 } 00:28:56.486 EOF 00:28:56.486 )") 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:56.486 09:14:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:56.486 "params": { 00:28:56.486 "name": "Nvme1", 00:28:56.486 "trtype": "tcp", 00:28:56.486 "traddr": "10.0.0.2", 00:28:56.486 "adrfam": "ipv4", 00:28:56.486 "trsvcid": "4420", 00:28:56.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:56.486 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:56.486 "hdgst": false, 00:28:56.486 "ddgst": false 00:28:56.486 }, 00:28:56.486 "method": "bdev_nvme_attach_controller" 00:28:56.486 }' 00:28:56.486 [2024-11-20 09:14:21.940936] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:28:56.486 [2024-11-20 09:14:21.941009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874929 ] 00:28:56.746 [2024-11-20 09:14:22.033123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.746 [2024-11-20 09:14:22.086042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.746 Running I/O for 1 seconds... 00:28:58.129 8463.00 IOPS, 33.06 MiB/s 00:28:58.129 Latency(us) 00:28:58.129 [2024-11-20T08:14:23.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.129 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:58.129 Verification LBA range: start 0x0 length 0x4000 00:28:58.129 Nvme1n1 : 1.01 8538.00 33.35 0.00 0.00 14922.64 2867.20 14527.15 00:28:58.129 [2024-11-20T08:14:23.658Z] =================================================================================================================== 00:28:58.129 [2024-11-20T08:14:23.658Z] Total : 8538.00 33.35 0.00 0.00 14922.64 2867.20 14527.15 00:28:58.129 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=875238 00:28:58.129 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:58.129 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:58.129 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:58.129 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:58.129 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:58.129 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:28:58.129 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.129 { 00:28:58.129 "params": { 00:28:58.130 "name": "Nvme$subsystem", 00:28:58.130 "trtype": "$TEST_TRANSPORT", 00:28:58.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.130 "adrfam": "ipv4", 00:28:58.130 "trsvcid": "$NVMF_PORT", 00:28:58.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.130 "hdgst": ${hdgst:-false}, 00:28:58.130 "ddgst": ${ddgst:-false} 00:28:58.130 }, 00:28:58.130 "method": "bdev_nvme_attach_controller" 00:28:58.130 } 00:28:58.130 EOF 00:28:58.130 )") 00:28:58.130 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:58.130 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:58.130 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:58.130 09:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:58.130 "params": { 00:28:58.130 "name": "Nvme1", 00:28:58.130 "trtype": "tcp", 00:28:58.130 "traddr": "10.0.0.2", 00:28:58.130 "adrfam": "ipv4", 00:28:58.130 "trsvcid": "4420", 00:28:58.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:58.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:58.130 "hdgst": false, 00:28:58.130 "ddgst": false 00:28:58.130 }, 00:28:58.130 "method": "bdev_nvme_attach_controller" 00:28:58.130 }' 00:28:58.130 [2024-11-20 09:14:23.462184] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:28:58.130 [2024-11-20 09:14:23.462265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875238 ] 00:28:58.130 [2024-11-20 09:14:23.556471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.130 [2024-11-20 09:14:23.609246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.390 Running I/O for 15 seconds... 00:29:00.712 11112.00 IOPS, 43.41 MiB/s [2024-11-20T08:14:26.503Z] 11157.00 IOPS, 43.58 MiB/s [2024-11-20T08:14:26.503Z] 09:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 874861 00:29:00.974 09:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:00.974 [2024-11-20 09:14:26.425398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.974 [2024-11-20 09:14:26.425439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.974 [2024-11-20 09:14:26.425459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.974 [2024-11-20 09:14:26.425469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.974 [2024-11-20 09:14:26.425481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.974 [2024-11-20 09:14:26.425491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.974 [2024-11-20 09:14:26.425501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.974 [2024-11-20 09:14:26.425509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.974 [2024-11-20 09:14:26.425519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.974 [2024-11-20 09:14:26.425528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.974 [2024-11-20 09:14:26.425541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.974 [2024-11-20 09:14:26.425551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.974 [2024-11-20 09:14:26.425561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.975 [2024-11-20 09:14:26.425569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.975 [2024-11-20 09:14:26.425586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 
09:14:26.425933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.425990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.425999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426024] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 
[2024-11-20 09:14:26.426412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426504] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.975 [2024-11-20 09:14:26.426546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.975 [2024-11-20 09:14:26.426562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.975 [2024-11-20 09:14:26.426579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.975 [2024-11-20 09:14:26.426596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:00.975 [2024-11-20 09:14:26.426606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426697] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:74 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.426802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.426819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.426836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.426853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.426869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.426885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:00.976 [2024-11-20 09:14:26.426895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.426902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.426919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.426935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.426953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.426970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.426986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.426995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 
09:14:26.427186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427275] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.976 [2024-11-20 09:14:26.427358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.427376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.427393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.427411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.427427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.427444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.427461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.427477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.427495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.427512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.427528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.976 [2024-11-20 09:14:26.427545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.976 [2024-11-20 09:14:26.427554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.977 [2024-11-20 09:14:26.427562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.977 [2024-11-20 09:14:26.427571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.977 [2024-11-20 09:14:26.427578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.977 [2024-11-20 09:14:26.427588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.977 [2024-11-20 09:14:26.427596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.977 [2024-11-20 09:14:26.427605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.977 [2024-11-20 09:14:26.427613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.977 [2024-11-20 09:14:26.427622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.977 [2024-11-20 09:14:26.427629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.977 [2024-11-20 09:14:26.427638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c2390 is same with the state(6) to be set 00:29:00.977 [2024-11-20 09:14:26.427647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:00.977 [2024-11-20 09:14:26.427653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:00.977 [2024-11-20 09:14:26.427660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:93832 len:8 PRP1 0x0 PRP2 0x0
00:29:00.977 [2024-11-20 09:14:26.427670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:00.977 [2024-11-20 09:14:26.431212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:00.977 [2024-11-20 09:14:26.431263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:00.977 [2024-11-20 09:14:26.432052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.977 [2024-11-20 09:14:26.432069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:00.977 [2024-11-20 09:14:26.432077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:00.977 [2024-11-20 09:14:26.432302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:00.977 [2024-11-20 09:14:26.432524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:00.977 [2024-11-20 09:14:26.432532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:00.977 [2024-11-20 09:14:26.432541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:00.977 [2024-11-20 09:14:26.432549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:00.977 [2024-11-20 09:14:26.445285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:00.977 [2024-11-20 09:14:26.445880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.977 [2024-11-20 09:14:26.445919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:00.977 [2024-11-20 09:14:26.445930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:00.977 [2024-11-20 09:14:26.446183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:00.977 [2024-11-20 09:14:26.446407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:00.977 [2024-11-20 09:14:26.446416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:00.977 [2024-11-20 09:14:26.446425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:00.977 [2024-11-20 09:14:26.446439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:00.977 [2024-11-20 09:14:26.459185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:00.977 [2024-11-20 09:14:26.459885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.977 [2024-11-20 09:14:26.459923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:00.977 [2024-11-20 09:14:26.459934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:00.977 [2024-11-20 09:14:26.460185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:00.977 [2024-11-20 09:14:26.460409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:00.977 [2024-11-20 09:14:26.460418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:00.977 [2024-11-20 09:14:26.460427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:00.977 [2024-11-20 09:14:26.460435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:00.977 [2024-11-20 09:14:26.472977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:00.977 [2024-11-20 09:14:26.473642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.977 [2024-11-20 09:14:26.473682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:00.977 [2024-11-20 09:14:26.473693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:00.977 [2024-11-20 09:14:26.473933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:00.977 [2024-11-20 09:14:26.474156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:00.977 [2024-11-20 09:14:26.474174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:00.977 [2024-11-20 09:14:26.474182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:00.977 [2024-11-20 09:14:26.474190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:00.977 [2024-11-20 09:14:26.486935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:00.977 [2024-11-20 09:14:26.487619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.977 [2024-11-20 09:14:26.487660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:00.977 [2024-11-20 09:14:26.487673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:00.977 [2024-11-20 09:14:26.487914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:00.977 [2024-11-20 09:14:26.488137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:00.977 [2024-11-20 09:14:26.488146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:00.977 [2024-11-20 09:14:26.488154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:00.977 [2024-11-20 09:14:26.488173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:01.238 [2024-11-20 09:14:26.500923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:01.238 [2024-11-20 09:14:26.501640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.238 [2024-11-20 09:14:26.501682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:01.238 [2024-11-20 09:14:26.501693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:01.238 [2024-11-20 09:14:26.501934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:01.238 [2024-11-20 09:14:26.502169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:01.238 [2024-11-20 09:14:26.502180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:01.238 [2024-11-20 09:14:26.502187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:01.238 [2024-11-20 09:14:26.502195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:01.238 [2024-11-20 09:14:26.514746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:01.238 [2024-11-20 09:14:26.515399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.238 [2024-11-20 09:14:26.515441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:01.238 [2024-11-20 09:14:26.515452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:01.238 [2024-11-20 09:14:26.515693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:01.238 [2024-11-20 09:14:26.515916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:01.238 [2024-11-20 09:14:26.515924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:01.238 [2024-11-20 09:14:26.515933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:01.238 [2024-11-20 09:14:26.515941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:01.238 [2024-11-20 09:14:26.528699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:01.238 [2024-11-20 09:14:26.529249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.238 [2024-11-20 09:14:26.529270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:01.238 [2024-11-20 09:14:26.529278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:01.238 [2024-11-20 09:14:26.529498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:01.238 [2024-11-20 09:14:26.529717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:01.238 [2024-11-20 09:14:26.529726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:01.238 [2024-11-20 09:14:26.529733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:01.238 [2024-11-20 09:14:26.529740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:01.238 [2024-11-20 09:14:26.542500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:01.238 [2024-11-20 09:14:26.543154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.238 [2024-11-20 09:14:26.543204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:01.238 [2024-11-20 09:14:26.543216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:01.238 [2024-11-20 09:14:26.543463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:01.238 [2024-11-20 09:14:26.543687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:01.238 [2024-11-20 09:14:26.543697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:01.238 [2024-11-20 09:14:26.543704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:01.238 [2024-11-20 09:14:26.543713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:01.238 [2024-11-20 09:14:26.556481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.238 [2024-11-20 09:14:26.557039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.238 [2024-11-20 09:14:26.557060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.238 [2024-11-20 09:14:26.557068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.238 [2024-11-20 09:14:26.557296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.238 [2024-11-20 09:14:26.557516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.238 [2024-11-20 09:14:26.557525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.238 [2024-11-20 09:14:26.557533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.238 [2024-11-20 09:14:26.557540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.238 [2024-11-20 09:14:26.570319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.238 [2024-11-20 09:14:26.570856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.238 [2024-11-20 09:14:26.570903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.238 [2024-11-20 09:14:26.570915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.238 [2024-11-20 09:14:26.571169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.238 [2024-11-20 09:14:26.571396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.238 [2024-11-20 09:14:26.571405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.238 [2024-11-20 09:14:26.571413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.238 [2024-11-20 09:14:26.571422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.238 [2024-11-20 09:14:26.584179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.238 [2024-11-20 09:14:26.584812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.238 [2024-11-20 09:14:26.584860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.238 [2024-11-20 09:14:26.584871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.238 [2024-11-20 09:14:26.585117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.238 [2024-11-20 09:14:26.585355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.238 [2024-11-20 09:14:26.585371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.238 [2024-11-20 09:14:26.585379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.238 [2024-11-20 09:14:26.585388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.238 [2024-11-20 09:14:26.598155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.238 [2024-11-20 09:14:26.598789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.238 [2024-11-20 09:14:26.598842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.239 [2024-11-20 09:14:26.598853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.239 [2024-11-20 09:14:26.599101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.239 [2024-11-20 09:14:26.599340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.239 [2024-11-20 09:14:26.599351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.239 [2024-11-20 09:14:26.599360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.239 [2024-11-20 09:14:26.599371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.239 [2024-11-20 09:14:26.612150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.239 [2024-11-20 09:14:26.612855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.239 [2024-11-20 09:14:26.612910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.239 [2024-11-20 09:14:26.612922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.239 [2024-11-20 09:14:26.613182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.239 [2024-11-20 09:14:26.613409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.239 [2024-11-20 09:14:26.613418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.239 [2024-11-20 09:14:26.613426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.239 [2024-11-20 09:14:26.613435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.239 [2024-11-20 09:14:26.626064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.239 [2024-11-20 09:14:26.626688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.239 [2024-11-20 09:14:26.626746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.239 [2024-11-20 09:14:26.626758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.239 [2024-11-20 09:14:26.627010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.239 [2024-11-20 09:14:26.627255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.239 [2024-11-20 09:14:26.627267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.239 [2024-11-20 09:14:26.627275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.239 [2024-11-20 09:14:26.627291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.239 [2024-11-20 09:14:26.640052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.239 [2024-11-20 09:14:26.640791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.239 [2024-11-20 09:14:26.640854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.239 [2024-11-20 09:14:26.640866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.239 [2024-11-20 09:14:26.641121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.239 [2024-11-20 09:14:26.641365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.239 [2024-11-20 09:14:26.641375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.239 [2024-11-20 09:14:26.641384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.239 [2024-11-20 09:14:26.641392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.239 [2024-11-20 09:14:26.654063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.239 [2024-11-20 09:14:26.654770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.239 [2024-11-20 09:14:26.654832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.239 [2024-11-20 09:14:26.654845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.239 [2024-11-20 09:14:26.655100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.239 [2024-11-20 09:14:26.655344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.239 [2024-11-20 09:14:26.655354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.239 [2024-11-20 09:14:26.655363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.239 [2024-11-20 09:14:26.655372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.239 [2024-11-20 09:14:26.667940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.239 [2024-11-20 09:14:26.668679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.239 [2024-11-20 09:14:26.668741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.239 [2024-11-20 09:14:26.668753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.239 [2024-11-20 09:14:26.669009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.239 [2024-11-20 09:14:26.669250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.239 [2024-11-20 09:14:26.669260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.239 [2024-11-20 09:14:26.669268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.239 [2024-11-20 09:14:26.669277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.239 [2024-11-20 09:14:26.681842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.239 [2024-11-20 09:14:26.682604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.239 [2024-11-20 09:14:26.682666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.239 [2024-11-20 09:14:26.682680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.239 [2024-11-20 09:14:26.682936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.239 [2024-11-20 09:14:26.683179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.239 [2024-11-20 09:14:26.683189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.239 [2024-11-20 09:14:26.683197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.239 [2024-11-20 09:14:26.683206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.239 [2024-11-20 09:14:26.695801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.239 [2024-11-20 09:14:26.696283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.239 [2024-11-20 09:14:26.696318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.239 [2024-11-20 09:14:26.696328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.239 [2024-11-20 09:14:26.696555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.239 [2024-11-20 09:14:26.696779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.239 [2024-11-20 09:14:26.696794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.239 [2024-11-20 09:14:26.696803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.239 [2024-11-20 09:14:26.696811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.239 [2024-11-20 09:14:26.709821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.239 [2024-11-20 09:14:26.710468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.239 [2024-11-20 09:14:26.710531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.239 [2024-11-20 09:14:26.710545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.239 [2024-11-20 09:14:26.710800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.239 [2024-11-20 09:14:26.711028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.239 [2024-11-20 09:14:26.711039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.239 [2024-11-20 09:14:26.711047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.239 [2024-11-20 09:14:26.711058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.239 [2024-11-20 09:14:26.723647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.239 [2024-11-20 09:14:26.724349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.239 [2024-11-20 09:14:26.724411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.239 [2024-11-20 09:14:26.724423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.239 [2024-11-20 09:14:26.724687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.239 [2024-11-20 09:14:26.724913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.239 [2024-11-20 09:14:26.724923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.239 [2024-11-20 09:14:26.724931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.239 [2024-11-20 09:14:26.724940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.240 [2024-11-20 09:14:26.737518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.240 [2024-11-20 09:14:26.738149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.240 [2024-11-20 09:14:26.738222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.240 [2024-11-20 09:14:26.738236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.240 [2024-11-20 09:14:26.738493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.240 [2024-11-20 09:14:26.738720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.240 [2024-11-20 09:14:26.738731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.240 [2024-11-20 09:14:26.738739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.240 [2024-11-20 09:14:26.738748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.240 [2024-11-20 09:14:26.751340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.240 [2024-11-20 09:14:26.752061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.240 [2024-11-20 09:14:26.752123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.240 [2024-11-20 09:14:26.752136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.240 [2024-11-20 09:14:26.752406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.240 [2024-11-20 09:14:26.752635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.240 [2024-11-20 09:14:26.752644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.240 [2024-11-20 09:14:26.752652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.240 [2024-11-20 09:14:26.752661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.502 [2024-11-20 09:14:26.765279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.502 [2024-11-20 09:14:26.766012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.502 [2024-11-20 09:14:26.766074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.502 [2024-11-20 09:14:26.766087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.502 [2024-11-20 09:14:26.766358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.502 [2024-11-20 09:14:26.766586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.502 [2024-11-20 09:14:26.766603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.502 [2024-11-20 09:14:26.766612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.502 [2024-11-20 09:14:26.766621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.502 [2024-11-20 09:14:26.779254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.502 [2024-11-20 09:14:26.779976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.502 [2024-11-20 09:14:26.780037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.502 [2024-11-20 09:14:26.780050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.502 [2024-11-20 09:14:26.780322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.502 [2024-11-20 09:14:26.780550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.502 [2024-11-20 09:14:26.780559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.502 [2024-11-20 09:14:26.780569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.502 [2024-11-20 09:14:26.780578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.502 [2024-11-20 09:14:26.793187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.502 [2024-11-20 09:14:26.793807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.502 [2024-11-20 09:14:26.793837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.502 [2024-11-20 09:14:26.793846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.502 [2024-11-20 09:14:26.794069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.502 [2024-11-20 09:14:26.794302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.502 [2024-11-20 09:14:26.794313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.502 [2024-11-20 09:14:26.794321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.502 [2024-11-20 09:14:26.794329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.502 [2024-11-20 09:14:26.807131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.502 [2024-11-20 09:14:26.807836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.502 [2024-11-20 09:14:26.807898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.502 [2024-11-20 09:14:26.807911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.502 [2024-11-20 09:14:26.808182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.502 [2024-11-20 09:14:26.808409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.502 [2024-11-20 09:14:26.808419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.502 [2024-11-20 09:14:26.808427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.502 [2024-11-20 09:14:26.808443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.502 [2024-11-20 09:14:26.821025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.502 [2024-11-20 09:14:26.821763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.502 [2024-11-20 09:14:26.821826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.502 [2024-11-20 09:14:26.821838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.502 [2024-11-20 09:14:26.822094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.502 [2024-11-20 09:14:26.822335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.502 [2024-11-20 09:14:26.822346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.502 [2024-11-20 09:14:26.822354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.502 [2024-11-20 09:14:26.822363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.502 [2024-11-20 09:14:26.834934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.502 [2024-11-20 09:14:26.835540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.502 [2024-11-20 09:14:26.835569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.502 [2024-11-20 09:14:26.835578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.502 [2024-11-20 09:14:26.835800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.502 [2024-11-20 09:14:26.836022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.502 [2024-11-20 09:14:26.836032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.502 [2024-11-20 09:14:26.836040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.502 [2024-11-20 09:14:26.836047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.502 [2024-11-20 09:14:26.848842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.502 [2024-11-20 09:14:26.849422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.502 [2024-11-20 09:14:26.849447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.502 [2024-11-20 09:14:26.849455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.502 [2024-11-20 09:14:26.849677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.502 [2024-11-20 09:14:26.849897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.502 [2024-11-20 09:14:26.849907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.502 [2024-11-20 09:14:26.849915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.502 [2024-11-20 09:14:26.849923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.502 [2024-11-20 09:14:26.862717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.502 [2024-11-20 09:14:26.863299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.502 [2024-11-20 09:14:26.863323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.502 [2024-11-20 09:14:26.863331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.502 [2024-11-20 09:14:26.863552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.502 [2024-11-20 09:14:26.863773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.502 [2024-11-20 09:14:26.863781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.502 [2024-11-20 09:14:26.863789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.502 [2024-11-20 09:14:26.863798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.502 [2024-11-20 09:14:26.876607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.502 [2024-11-20 09:14:26.877262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.502 [2024-11-20 09:14:26.877326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.502 [2024-11-20 09:14:26.877340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.502 [2024-11-20 09:14:26.877596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.502 [2024-11-20 09:14:26.877822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.502 [2024-11-20 09:14:26.877833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.502 [2024-11-20 09:14:26.877841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.503 [2024-11-20 09:14:26.877850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.503 [2024-11-20 09:14:26.890430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.503 [2024-11-20 09:14:26.891185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.503 [2024-11-20 09:14:26.891247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.503 [2024-11-20 09:14:26.891260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.503 [2024-11-20 09:14:26.891515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.503 [2024-11-20 09:14:26.891742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.503 [2024-11-20 09:14:26.891751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.503 [2024-11-20 09:14:26.891759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.503 [2024-11-20 09:14:26.891767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.503 [2024-11-20 09:14:26.904338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.503 [2024-11-20 09:14:26.905056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.503 [2024-11-20 09:14:26.905118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.503 [2024-11-20 09:14:26.905131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.503 [2024-11-20 09:14:26.905408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.503 [2024-11-20 09:14:26.905643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.503 [2024-11-20 09:14:26.905652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.503 [2024-11-20 09:14:26.905661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.503 [2024-11-20 09:14:26.905670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.503 9358.33 IOPS, 36.56 MiB/s [2024-11-20T08:14:27.032Z] [2024-11-20 09:14:26.918258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.503 [2024-11-20 09:14:26.918990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.503 [2024-11-20 09:14:26.919052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.503 [2024-11-20 09:14:26.919065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.503 [2024-11-20 09:14:26.919335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.503 [2024-11-20 09:14:26.919562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.503 [2024-11-20 09:14:26.919573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.503 [2024-11-20 09:14:26.919582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.503 [2024-11-20 09:14:26.919591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.503 [2024-11-20 09:14:26.932164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.503 [2024-11-20 09:14:26.932919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.503 [2024-11-20 09:14:26.932981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.503 [2024-11-20 09:14:26.932996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.503 [2024-11-20 09:14:26.933267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.503 [2024-11-20 09:14:26.933495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.503 [2024-11-20 09:14:26.933504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.503 [2024-11-20 09:14:26.933513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.503 [2024-11-20 09:14:26.933523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.503 [2024-11-20 09:14:26.946096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.503 [2024-11-20 09:14:26.946817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.503 [2024-11-20 09:14:26.946880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.503 [2024-11-20 09:14:26.946892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.503 [2024-11-20 09:14:26.947147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.503 [2024-11-20 09:14:26.947393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.503 [2024-11-20 09:14:26.947403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.503 [2024-11-20 09:14:26.947411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.503 [2024-11-20 09:14:26.947420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.503 [2024-11-20 09:14:26.959984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.503 [2024-11-20 09:14:26.960722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.503 [2024-11-20 09:14:26.960785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.503 [2024-11-20 09:14:26.960798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.503 [2024-11-20 09:14:26.961053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.503 [2024-11-20 09:14:26.961294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.503 [2024-11-20 09:14:26.961304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.503 [2024-11-20 09:14:26.961312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.503 [2024-11-20 09:14:26.961321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.503 [2024-11-20 09:14:26.973909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.503 [2024-11-20 09:14:26.974647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.503 [2024-11-20 09:14:26.974709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.503 [2024-11-20 09:14:26.974721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.503 [2024-11-20 09:14:26.974976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.503 [2024-11-20 09:14:26.975218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.503 [2024-11-20 09:14:26.975229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.503 [2024-11-20 09:14:26.975238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.503 [2024-11-20 09:14:26.975247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.503 [2024-11-20 09:14:26.987811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.503 [2024-11-20 09:14:26.988407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.503 [2024-11-20 09:14:26.988437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.503 [2024-11-20 09:14:26.988445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.503 [2024-11-20 09:14:26.988669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.503 [2024-11-20 09:14:26.988890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.503 [2024-11-20 09:14:26.988899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.503 [2024-11-20 09:14:26.988907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.503 [2024-11-20 09:14:26.988923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.503 [2024-11-20 09:14:27.001693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.503 [2024-11-20 09:14:27.002281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.503 [2024-11-20 09:14:27.002307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.503 [2024-11-20 09:14:27.002316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.503 [2024-11-20 09:14:27.002538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.503 [2024-11-20 09:14:27.002758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.503 [2024-11-20 09:14:27.002768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.503 [2024-11-20 09:14:27.002776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.503 [2024-11-20 09:14:27.002784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.503 [2024-11-20 09:14:27.015560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.503 [2024-11-20 09:14:27.016271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.503 [2024-11-20 09:14:27.016334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.504 [2024-11-20 09:14:27.016347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.504 [2024-11-20 09:14:27.016603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.504 [2024-11-20 09:14:27.016830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.504 [2024-11-20 09:14:27.016838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.504 [2024-11-20 09:14:27.016847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.504 [2024-11-20 09:14:27.016856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.766 [2024-11-20 09:14:27.029442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.766 [2024-11-20 09:14:27.030183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.766 [2024-11-20 09:14:27.030245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.766 [2024-11-20 09:14:27.030258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.766 [2024-11-20 09:14:27.030514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.766 [2024-11-20 09:14:27.030741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.766 [2024-11-20 09:14:27.030750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.766 [2024-11-20 09:14:27.030758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.766 [2024-11-20 09:14:27.030767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.766 [2024-11-20 09:14:27.043344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.766 [2024-11-20 09:14:27.044081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.766 [2024-11-20 09:14:27.044143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.766 [2024-11-20 09:14:27.044156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.766 [2024-11-20 09:14:27.044426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.766 [2024-11-20 09:14:27.044653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.766 [2024-11-20 09:14:27.044663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.766 [2024-11-20 09:14:27.044672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.766 [2024-11-20 09:14:27.044681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.766 [2024-11-20 09:14:27.057246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.766 [2024-11-20 09:14:27.057971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.766 [2024-11-20 09:14:27.058032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.766 [2024-11-20 09:14:27.058044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.766 [2024-11-20 09:14:27.058313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.766 [2024-11-20 09:14:27.058541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.767 [2024-11-20 09:14:27.058550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.767 [2024-11-20 09:14:27.058558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.767 [2024-11-20 09:14:27.058568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.767 [2024-11-20 09:14:27.071150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.767 [2024-11-20 09:14:27.071747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.767 [2024-11-20 09:14:27.071807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.767 [2024-11-20 09:14:27.071820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.767 [2024-11-20 09:14:27.072074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.767 [2024-11-20 09:14:27.072315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.767 [2024-11-20 09:14:27.072326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.767 [2024-11-20 09:14:27.072334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.767 [2024-11-20 09:14:27.072343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.767 [2024-11-20 09:14:27.085124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.767 [2024-11-20 09:14:27.085863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.767 [2024-11-20 09:14:27.085926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.767 [2024-11-20 09:14:27.085946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.767 [2024-11-20 09:14:27.086214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.767 [2024-11-20 09:14:27.086442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.767 [2024-11-20 09:14:27.086452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.767 [2024-11-20 09:14:27.086460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.767 [2024-11-20 09:14:27.086469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.767 [2024-11-20 09:14:27.099038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.767 [2024-11-20 09:14:27.099774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.767 [2024-11-20 09:14:27.099838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.767 [2024-11-20 09:14:27.099850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.767 [2024-11-20 09:14:27.100105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.767 [2024-11-20 09:14:27.100345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.767 [2024-11-20 09:14:27.100357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.767 [2024-11-20 09:14:27.100366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.767 [2024-11-20 09:14:27.100375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.767 [2024-11-20 09:14:27.112952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.767 [2024-11-20 09:14:27.113648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.767 [2024-11-20 09:14:27.113710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.767 [2024-11-20 09:14:27.113722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.767 [2024-11-20 09:14:27.113977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.767 [2024-11-20 09:14:27.114216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.767 [2024-11-20 09:14:27.114228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.767 [2024-11-20 09:14:27.114239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.767 [2024-11-20 09:14:27.114248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.767 [2024-11-20 09:14:27.126852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.767 [2024-11-20 09:14:27.127460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.767 [2024-11-20 09:14:27.127491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.767 [2024-11-20 09:14:27.127501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.767 [2024-11-20 09:14:27.127724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.767 [2024-11-20 09:14:27.127954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.767 [2024-11-20 09:14:27.127964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.767 [2024-11-20 09:14:27.127972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.767 [2024-11-20 09:14:27.127980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.767 [2024-11-20 09:14:27.140760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.767 [2024-11-20 09:14:27.141444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.767 [2024-11-20 09:14:27.141508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.767 [2024-11-20 09:14:27.141521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.767 [2024-11-20 09:14:27.141776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.767 [2024-11-20 09:14:27.142002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.767 [2024-11-20 09:14:27.142012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.767 [2024-11-20 09:14:27.142020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.767 [2024-11-20 09:14:27.142029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.767 [2024-11-20 09:14:27.154619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.767 [2024-11-20 09:14:27.155421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.767 [2024-11-20 09:14:27.155484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.767 [2024-11-20 09:14:27.155496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.767 [2024-11-20 09:14:27.155751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.767 [2024-11-20 09:14:27.155978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.767 [2024-11-20 09:14:27.155987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.767 [2024-11-20 09:14:27.155995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.767 [2024-11-20 09:14:27.156004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.767 [2024-11-20 09:14:27.168589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.767 [2024-11-20 09:14:27.169059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.767 [2024-11-20 09:14:27.169090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.767 [2024-11-20 09:14:27.169100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.767 [2024-11-20 09:14:27.169347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.767 [2024-11-20 09:14:27.169572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.767 [2024-11-20 09:14:27.169581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.767 [2024-11-20 09:14:27.169589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.767 [2024-11-20 09:14:27.169604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.767 [2024-11-20 09:14:27.182594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.767 [2024-11-20 09:14:27.183249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.767 [2024-11-20 09:14:27.183294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.767 [2024-11-20 09:14:27.183305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.767 [2024-11-20 09:14:27.183544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.767 [2024-11-20 09:14:27.183769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.768 [2024-11-20 09:14:27.183779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.768 [2024-11-20 09:14:27.183786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.768 [2024-11-20 09:14:27.183797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.768 [2024-11-20 09:14:27.196600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.768 [2024-11-20 09:14:27.197279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.768 [2024-11-20 09:14:27.197343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.768 [2024-11-20 09:14:27.197359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.768 [2024-11-20 09:14:27.197614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.768 [2024-11-20 09:14:27.197841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.768 [2024-11-20 09:14:27.197852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.768 [2024-11-20 09:14:27.197861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.768 [2024-11-20 09:14:27.197871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.768 [2024-11-20 09:14:27.210552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.768 [2024-11-20 09:14:27.211133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.768 [2024-11-20 09:14:27.211169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.768 [2024-11-20 09:14:27.211179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.768 [2024-11-20 09:14:27.211401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.768 [2024-11-20 09:14:27.211622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.768 [2024-11-20 09:14:27.211632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.768 [2024-11-20 09:14:27.211640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.768 [2024-11-20 09:14:27.211648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.768 [2024-11-20 09:14:27.224443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.768 [2024-11-20 09:14:27.225070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.768 [2024-11-20 09:14:27.225095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.768 [2024-11-20 09:14:27.225103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.768 [2024-11-20 09:14:27.225335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.768 [2024-11-20 09:14:27.225557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.768 [2024-11-20 09:14:27.225569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.768 [2024-11-20 09:14:27.225576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.768 [2024-11-20 09:14:27.225584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.768 [2024-11-20 09:14:27.238357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.768 [2024-11-20 09:14:27.238931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.768 [2024-11-20 09:14:27.238955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.768 [2024-11-20 09:14:27.238963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.768 [2024-11-20 09:14:27.239192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.768 [2024-11-20 09:14:27.239414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.768 [2024-11-20 09:14:27.239426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.768 [2024-11-20 09:14:27.239434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.768 [2024-11-20 09:14:27.239442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.768 [2024-11-20 09:14:27.252231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.768 [2024-11-20 09:14:27.252722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.768 [2024-11-20 09:14:27.252745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.768 [2024-11-20 09:14:27.252753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.768 [2024-11-20 09:14:27.252974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.768 [2024-11-20 09:14:27.253200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.768 [2024-11-20 09:14:27.253211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.768 [2024-11-20 09:14:27.253219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.768 [2024-11-20 09:14:27.253227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.768 [2024-11-20 09:14:27.266216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.768 [2024-11-20 09:14:27.266810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.768 [2024-11-20 09:14:27.266832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.768 [2024-11-20 09:14:27.266846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.768 [2024-11-20 09:14:27.267067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.768 [2024-11-20 09:14:27.267296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.768 [2024-11-20 09:14:27.267307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.768 [2024-11-20 09:14:27.267315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.768 [2024-11-20 09:14:27.267323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:01.768 [2024-11-20 09:14:27.280132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:01.768 [2024-11-20 09:14:27.280842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.768 [2024-11-20 09:14:27.280906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:01.768 [2024-11-20 09:14:27.280918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:01.768 [2024-11-20 09:14:27.281186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:01.768 [2024-11-20 09:14:27.281413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:01.768 [2024-11-20 09:14:27.281423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:01.768 [2024-11-20 09:14:27.281431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:01.768 [2024-11-20 09:14:27.281440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.038 [2024-11-20 09:14:27.294018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.039 [2024-11-20 09:14:27.294618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.039 [2024-11-20 09:14:27.294648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.039 [2024-11-20 09:14:27.294657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.039 [2024-11-20 09:14:27.294880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.039 [2024-11-20 09:14:27.295102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.039 [2024-11-20 09:14:27.295113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.039 [2024-11-20 09:14:27.295121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.039 [2024-11-20 09:14:27.295128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.039 [2024-11-20 09:14:27.307911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.039 [2024-11-20 09:14:27.308489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.039 [2024-11-20 09:14:27.308514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.039 [2024-11-20 09:14:27.308523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.039 [2024-11-20 09:14:27.308744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.039 [2024-11-20 09:14:27.308973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.039 [2024-11-20 09:14:27.308983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.039 [2024-11-20 09:14:27.308990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.039 [2024-11-20 09:14:27.308997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.039 [2024-11-20 09:14:27.321793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.039 [2024-11-20 09:14:27.322281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.040 [2024-11-20 09:14:27.322307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.040 [2024-11-20 09:14:27.322315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.040 [2024-11-20 09:14:27.322537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.040 [2024-11-20 09:14:27.322758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.040 [2024-11-20 09:14:27.322767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.040 [2024-11-20 09:14:27.322775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.040 [2024-11-20 09:14:27.322783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.040 [2024-11-20 09:14:27.335756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.040 [2024-11-20 09:14:27.336336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.040 [2024-11-20 09:14:27.336359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.040 [2024-11-20 09:14:27.336368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.040 [2024-11-20 09:14:27.336588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.040 [2024-11-20 09:14:27.336808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.040 [2024-11-20 09:14:27.336817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.040 [2024-11-20 09:14:27.336826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.040 [2024-11-20 09:14:27.336833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.040 [2024-11-20 09:14:27.348519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.040 [2024-11-20 09:14:27.349057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.040 [2024-11-20 09:14:27.349077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.040 [2024-11-20 09:14:27.349083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.040 [2024-11-20 09:14:27.349242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.040 [2024-11-20 09:14:27.349396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.040 [2024-11-20 09:14:27.349403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.040 [2024-11-20 09:14:27.349408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.041 [2024-11-20 09:14:27.349420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.041 [2024-11-20 09:14:27.361264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.041 [2024-11-20 09:14:27.361764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.041 [2024-11-20 09:14:27.361783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.041 [2024-11-20 09:14:27.361790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.041 [2024-11-20 09:14:27.361943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.041 [2024-11-20 09:14:27.362095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.041 [2024-11-20 09:14:27.362102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.041 [2024-11-20 09:14:27.362108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.041 [2024-11-20 09:14:27.362114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.041 [2024-11-20 09:14:27.373891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.041 [2024-11-20 09:14:27.374477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.041 [2024-11-20 09:14:27.374525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.041 [2024-11-20 09:14:27.374535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.041 [2024-11-20 09:14:27.374713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.041 [2024-11-20 09:14:27.374869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.041 [2024-11-20 09:14:27.374876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.041 [2024-11-20 09:14:27.374883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.041 [2024-11-20 09:14:27.374889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.042 [2024-11-20 09:14:27.386522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.042 [2024-11-20 09:14:27.387002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.042 [2024-11-20 09:14:27.387046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.042 [2024-11-20 09:14:27.387055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.042 [2024-11-20 09:14:27.387240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.042 [2024-11-20 09:14:27.387397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.042 [2024-11-20 09:14:27.387404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.042 [2024-11-20 09:14:27.387410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.042 [2024-11-20 09:14:27.387416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.042 [2024-11-20 09:14:27.399182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.042 [2024-11-20 09:14:27.399684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.042 [2024-11-20 09:14:27.399703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.042 [2024-11-20 09:14:27.399709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.042 [2024-11-20 09:14:27.399862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.042 [2024-11-20 09:14:27.400013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.044 [2024-11-20 09:14:27.400019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.044 [2024-11-20 09:14:27.400025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.044 [2024-11-20 09:14:27.400030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.044 [2024-11-20 09:14:27.411924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.044 [2024-11-20 09:14:27.412573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.044 [2024-11-20 09:14:27.412613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.044 [2024-11-20 09:14:27.412622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.044 [2024-11-20 09:14:27.412795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.044 [2024-11-20 09:14:27.412950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.044 [2024-11-20 09:14:27.412957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.044 [2024-11-20 09:14:27.412962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.044 [2024-11-20 09:14:27.412968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.044 [2024-11-20 09:14:27.424647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.044 [2024-11-20 09:14:27.425201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.044 [2024-11-20 09:14:27.425221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.044 [2024-11-20 09:14:27.425227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.045 [2024-11-20 09:14:27.425379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.045 [2024-11-20 09:14:27.425531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.045 [2024-11-20 09:14:27.425538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.045 [2024-11-20 09:14:27.425543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.045 [2024-11-20 09:14:27.425548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.045 [2024-11-20 09:14:27.437296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.045 [2024-11-20 09:14:27.437749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.045 [2024-11-20 09:14:27.437764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.045 [2024-11-20 09:14:27.437774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.045 [2024-11-20 09:14:27.437926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.045 [2024-11-20 09:14:27.438077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.045 [2024-11-20 09:14:27.438083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.045 [2024-11-20 09:14:27.438088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.045 [2024-11-20 09:14:27.438094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.045 [2024-11-20 09:14:27.449981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.045 [2024-11-20 09:14:27.450632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.045 [2024-11-20 09:14:27.450669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.045 [2024-11-20 09:14:27.450677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.045 [2024-11-20 09:14:27.450847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.045 [2024-11-20 09:14:27.451002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.045 [2024-11-20 09:14:27.451008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.045 [2024-11-20 09:14:27.451013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.045 [2024-11-20 09:14:27.451020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.045 [2024-11-20 09:14:27.462731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.045 [2024-11-20 09:14:27.463234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.045 [2024-11-20 09:14:27.463253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.045 [2024-11-20 09:14:27.463259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.045 [2024-11-20 09:14:27.463410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.045 [2024-11-20 09:14:27.463561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.045 [2024-11-20 09:14:27.463567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.045 [2024-11-20 09:14:27.463573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.046 [2024-11-20 09:14:27.463578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.046 [2024-11-20 09:14:27.475479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.046 [2024-11-20 09:14:27.475973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.046 [2024-11-20 09:14:27.475987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.046 [2024-11-20 09:14:27.475993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.046 [2024-11-20 09:14:27.476144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.046 [2024-11-20 09:14:27.476299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.046 [2024-11-20 09:14:27.476310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.046 [2024-11-20 09:14:27.476315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.046 [2024-11-20 09:14:27.476320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.046 [2024-11-20 09:14:27.488204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.046 [2024-11-20 09:14:27.488681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.046 [2024-11-20 09:14:27.488694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.046 [2024-11-20 09:14:27.488699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.046 [2024-11-20 09:14:27.488849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.046 [2024-11-20 09:14:27.488999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.046 [2024-11-20 09:14:27.489006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.046 [2024-11-20 09:14:27.489011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.046 [2024-11-20 09:14:27.489015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.046 [2024-11-20 09:14:27.500899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.046 [2024-11-20 09:14:27.501350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.046 [2024-11-20 09:14:27.501363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.046 [2024-11-20 09:14:27.501368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.046 [2024-11-20 09:14:27.501518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.046 [2024-11-20 09:14:27.501669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.046 [2024-11-20 09:14:27.501674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.046 [2024-11-20 09:14:27.501679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.046 [2024-11-20 09:14:27.501684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.046 [2024-11-20 09:14:27.513565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.046 [2024-11-20 09:14:27.513943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.046 [2024-11-20 09:14:27.513955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.046 [2024-11-20 09:14:27.513961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.046 [2024-11-20 09:14:27.514111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.046 [2024-11-20 09:14:27.514266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.046 [2024-11-20 09:14:27.514272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.046 [2024-11-20 09:14:27.514278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.046 [2024-11-20 09:14:27.514288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.046 [2024-11-20 09:14:27.526178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.046 [2024-11-20 09:14:27.526666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.046 [2024-11-20 09:14:27.526678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.046 [2024-11-20 09:14:27.526684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.046 [2024-11-20 09:14:27.526834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.046 [2024-11-20 09:14:27.526984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.046 [2024-11-20 09:14:27.526990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.046 [2024-11-20 09:14:27.526995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.046 [2024-11-20 09:14:27.526999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.046 [2024-11-20 09:14:27.538883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.046 [2024-11-20 09:14:27.539459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.046 [2024-11-20 09:14:27.539489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.046 [2024-11-20 09:14:27.539499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.046 [2024-11-20 09:14:27.539666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.046 [2024-11-20 09:14:27.539820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.046 [2024-11-20 09:14:27.539826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.046 [2024-11-20 09:14:27.539831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.046 [2024-11-20 09:14:27.539837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.046 [2024-11-20 09:14:27.551588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.046 [2024-11-20 09:14:27.552176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.046 [2024-11-20 09:14:27.552206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.046 [2024-11-20 09:14:27.552215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.046 [2024-11-20 09:14:27.552383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.046 [2024-11-20 09:14:27.552536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.046 [2024-11-20 09:14:27.552542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.046 [2024-11-20 09:14:27.552548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.046 [2024-11-20 09:14:27.552553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.312 [2024-11-20 09:14:27.564303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.312 [2024-11-20 09:14:27.564671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.312 [2024-11-20 09:14:27.564686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.312 [2024-11-20 09:14:27.564692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.312 [2024-11-20 09:14:27.564843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.312 [2024-11-20 09:14:27.564994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.312 [2024-11-20 09:14:27.565000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.312 [2024-11-20 09:14:27.565005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.312 [2024-11-20 09:14:27.565010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.312 [2024-11-20 09:14:27.577048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.312 [2024-11-20 09:14:27.577622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.312 [2024-11-20 09:14:27.577652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.312 [2024-11-20 09:14:27.577661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.312 [2024-11-20 09:14:27.577827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.312 [2024-11-20 09:14:27.577981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.312 [2024-11-20 09:14:27.577987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.312 [2024-11-20 09:14:27.577992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.312 [2024-11-20 09:14:27.577998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.312 [2024-11-20 09:14:27.589746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.313 [2024-11-20 09:14:27.590243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.313 [2024-11-20 09:14:27.590273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.313 [2024-11-20 09:14:27.590282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.313 [2024-11-20 09:14:27.590451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.313 [2024-11-20 09:14:27.590604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.313 [2024-11-20 09:14:27.590610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.313 [2024-11-20 09:14:27.590616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.313 [2024-11-20 09:14:27.590621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.313 [2024-11-20 09:14:27.602373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.313 [2024-11-20 09:14:27.602716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.313 [2024-11-20 09:14:27.602732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.313 [2024-11-20 09:14:27.602741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.313 [2024-11-20 09:14:27.602893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.313 [2024-11-20 09:14:27.603044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.313 [2024-11-20 09:14:27.603049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.313 [2024-11-20 09:14:27.603055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.313 [2024-11-20 09:14:27.603060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.313 [2024-11-20 09:14:27.615088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.313 [2024-11-20 09:14:27.615751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.313 [2024-11-20 09:14:27.615781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.313 [2024-11-20 09:14:27.615790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.313 [2024-11-20 09:14:27.615956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.313 [2024-11-20 09:14:27.616109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.313 [2024-11-20 09:14:27.616115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.313 [2024-11-20 09:14:27.616121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.313 [2024-11-20 09:14:27.616127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.313 [2024-11-20 09:14:27.627758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.313 [2024-11-20 09:14:27.628231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.313 [2024-11-20 09:14:27.628246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.313 [2024-11-20 09:14:27.628252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.313 [2024-11-20 09:14:27.628403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.313 [2024-11-20 09:14:27.628553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.313 [2024-11-20 09:14:27.628559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.313 [2024-11-20 09:14:27.628565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.313 [2024-11-20 09:14:27.628569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.313 [2024-11-20 09:14:27.640459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.313 [2024-11-20 09:14:27.641044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.313 [2024-11-20 09:14:27.641075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.313 [2024-11-20 09:14:27.641084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.313 [2024-11-20 09:14:27.641258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.313 [2024-11-20 09:14:27.641412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.313 [2024-11-20 09:14:27.641422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.313 [2024-11-20 09:14:27.641427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.313 [2024-11-20 09:14:27.641433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.313 [2024-11-20 09:14:27.653183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.313 [2024-11-20 09:14:27.653707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.313 [2024-11-20 09:14:27.653738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.313 [2024-11-20 09:14:27.653746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.313 [2024-11-20 09:14:27.653912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.313 [2024-11-20 09:14:27.654066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.313 [2024-11-20 09:14:27.654072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.313 [2024-11-20 09:14:27.654078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.313 [2024-11-20 09:14:27.654084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.313 [2024-11-20 09:14:27.665831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.313 [2024-11-20 09:14:27.666409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.313 [2024-11-20 09:14:27.666439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.313 [2024-11-20 09:14:27.666447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.313 [2024-11-20 09:14:27.666613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.313 [2024-11-20 09:14:27.666767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.313 [2024-11-20 09:14:27.666773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.313 [2024-11-20 09:14:27.666779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.313 [2024-11-20 09:14:27.666784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.313 [2024-11-20 09:14:27.678504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.313 [2024-11-20 09:14:27.679082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.313 [2024-11-20 09:14:27.679112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.313 [2024-11-20 09:14:27.679120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.313 [2024-11-20 09:14:27.679296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.313 [2024-11-20 09:14:27.679450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.313 [2024-11-20 09:14:27.679457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.313 [2024-11-20 09:14:27.679462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.313 [2024-11-20 09:14:27.679471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.313 [2024-11-20 09:14:27.691221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.313 [2024-11-20 09:14:27.691816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.313 [2024-11-20 09:14:27.691846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.313 [2024-11-20 09:14:27.691856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.313 [2024-11-20 09:14:27.692023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.313 [2024-11-20 09:14:27.692182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.313 [2024-11-20 09:14:27.692189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.313 [2024-11-20 09:14:27.692195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.313 [2024-11-20 09:14:27.692200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.313 [2024-11-20 09:14:27.703943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.313 [2024-11-20 09:14:27.704560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.313 [2024-11-20 09:14:27.704591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.313 [2024-11-20 09:14:27.704599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.313 [2024-11-20 09:14:27.704766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.313 [2024-11-20 09:14:27.704919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.314 [2024-11-20 09:14:27.704925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.314 [2024-11-20 09:14:27.704931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.314 [2024-11-20 09:14:27.704937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.314 [2024-11-20 09:14:27.716687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.314 [2024-11-20 09:14:27.717148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.314 [2024-11-20 09:14:27.717166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.314 [2024-11-20 09:14:27.717172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.314 [2024-11-20 09:14:27.717330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.314 [2024-11-20 09:14:27.717481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.314 [2024-11-20 09:14:27.717487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.314 [2024-11-20 09:14:27.717492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.314 [2024-11-20 09:14:27.717497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.314 [2024-11-20 09:14:27.729381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.314 [2024-11-20 09:14:27.729870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.314 [2024-11-20 09:14:27.729883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.314 [2024-11-20 09:14:27.729888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.314 [2024-11-20 09:14:27.730038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.314 [2024-11-20 09:14:27.730193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.314 [2024-11-20 09:14:27.730200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.314 [2024-11-20 09:14:27.730205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.314 [2024-11-20 09:14:27.730209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.314 [2024-11-20 09:14:27.742089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.314 [2024-11-20 09:14:27.742635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.314 [2024-11-20 09:14:27.742665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.314 [2024-11-20 09:14:27.742674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.314 [2024-11-20 09:14:27.742840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.314 [2024-11-20 09:14:27.742993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.314 [2024-11-20 09:14:27.742999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.314 [2024-11-20 09:14:27.743004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.314 [2024-11-20 09:14:27.743010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.314 [2024-11-20 09:14:27.754762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.314 [2024-11-20 09:14:27.755275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.314 [2024-11-20 09:14:27.755306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.314 [2024-11-20 09:14:27.755314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.314 [2024-11-20 09:14:27.755483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.314 [2024-11-20 09:14:27.755636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.314 [2024-11-20 09:14:27.755642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.314 [2024-11-20 09:14:27.755648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.314 [2024-11-20 09:14:27.755653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.314 [2024-11-20 09:14:27.767406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.314 [2024-11-20 09:14:27.767978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.314 [2024-11-20 09:14:27.768008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.314 [2024-11-20 09:14:27.768021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.314 [2024-11-20 09:14:27.768196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.314 [2024-11-20 09:14:27.768350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.314 [2024-11-20 09:14:27.768356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.314 [2024-11-20 09:14:27.768361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.314 [2024-11-20 09:14:27.768367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.314 [2024-11-20 09:14:27.780121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.314 [2024-11-20 09:14:27.780571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.314 [2024-11-20 09:14:27.780586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.314 [2024-11-20 09:14:27.780591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.314 [2024-11-20 09:14:27.780742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.314 [2024-11-20 09:14:27.780893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.314 [2024-11-20 09:14:27.780899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.314 [2024-11-20 09:14:27.780904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.314 [2024-11-20 09:14:27.780908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.314 [2024-11-20 09:14:27.792794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.314 [2024-11-20 09:14:27.793222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.314 [2024-11-20 09:14:27.793235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.314 [2024-11-20 09:14:27.793240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.314 [2024-11-20 09:14:27.793391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.314 [2024-11-20 09:14:27.793541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.314 [2024-11-20 09:14:27.793546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.314 [2024-11-20 09:14:27.793551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.314 [2024-11-20 09:14:27.793556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.314 [2024-11-20 09:14:27.805435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.314 [2024-11-20 09:14:27.806004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.314 [2024-11-20 09:14:27.806035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.314 [2024-11-20 09:14:27.806043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.314 [2024-11-20 09:14:27.806215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.314 [2024-11-20 09:14:27.806369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.314 [2024-11-20 09:14:27.806379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.314 [2024-11-20 09:14:27.806384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.314 [2024-11-20 09:14:27.806390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.314 [2024-11-20 09:14:27.818140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.314 [2024-11-20 09:14:27.818630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.314 [2024-11-20 09:14:27.818645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.314 [2024-11-20 09:14:27.818650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.314 [2024-11-20 09:14:27.818801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.314 [2024-11-20 09:14:27.818952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.314 [2024-11-20 09:14:27.818958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.314 [2024-11-20 09:14:27.818963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.314 [2024-11-20 09:14:27.818967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.314 [2024-11-20 09:14:27.830847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.314 [2024-11-20 09:14:27.831298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.314 [2024-11-20 09:14:27.831329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.314 [2024-11-20 09:14:27.831338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.315 [2024-11-20 09:14:27.831506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.315 [2024-11-20 09:14:27.831660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.315 [2024-11-20 09:14:27.831666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.315 [2024-11-20 09:14:27.831672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.315 [2024-11-20 09:14:27.831678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.576 [2024-11-20 09:14:27.843572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.576 [2024-11-20 09:14:27.844683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.576 [2024-11-20 09:14:27.844701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.576 [2024-11-20 09:14:27.844708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.576 [2024-11-20 09:14:27.844865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.576 [2024-11-20 09:14:27.845017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.576 [2024-11-20 09:14:27.845023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.576 [2024-11-20 09:14:27.845028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.576 [2024-11-20 09:14:27.845037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.576 [2024-11-20 09:14:27.856216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.576 [2024-11-20 09:14:27.856705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.576 [2024-11-20 09:14:27.856718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.576 [2024-11-20 09:14:27.856724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.576 [2024-11-20 09:14:27.856875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.576 [2024-11-20 09:14:27.857025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.576 [2024-11-20 09:14:27.857031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.576 [2024-11-20 09:14:27.857036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.576 [2024-11-20 09:14:27.857041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.576 [2024-11-20 09:14:27.868941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.576 [2024-11-20 09:14:27.869496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.576 [2024-11-20 09:14:27.869526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.576 [2024-11-20 09:14:27.869534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.576 [2024-11-20 09:14:27.869700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.576 [2024-11-20 09:14:27.869854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.576 [2024-11-20 09:14:27.869860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.576 [2024-11-20 09:14:27.869866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.576 [2024-11-20 09:14:27.869872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.576 [2024-11-20 09:14:27.881628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.576 [2024-11-20 09:14:27.882208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.576 [2024-11-20 09:14:27.882238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.576 [2024-11-20 09:14:27.882247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.576 [2024-11-20 09:14:27.882416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.576 [2024-11-20 09:14:27.882569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.576 [2024-11-20 09:14:27.882575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.576 [2024-11-20 09:14:27.882581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.576 [2024-11-20 09:14:27.882587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.576 [2024-11-20 09:14:27.894333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.576 [2024-11-20 09:14:27.894906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.576 [2024-11-20 09:14:27.894936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.576 [2024-11-20 09:14:27.894945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.576 [2024-11-20 09:14:27.895111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.577 [2024-11-20 09:14:27.895273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.577 [2024-11-20 09:14:27.895280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.577 [2024-11-20 09:14:27.895286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.577 [2024-11-20 09:14:27.895291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.577 [2024-11-20 09:14:27.907032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.577 [2024-11-20 09:14:27.907579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.577 [2024-11-20 09:14:27.907609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.577 [2024-11-20 09:14:27.907618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.577 [2024-11-20 09:14:27.907784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.577 [2024-11-20 09:14:27.907942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.577 [2024-11-20 09:14:27.907949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.577 [2024-11-20 09:14:27.907954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.577 [2024-11-20 09:14:27.907960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.577 7018.75 IOPS, 27.42 MiB/s [2024-11-20T08:14:28.106Z] [2024-11-20 09:14:27.919714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.577 [2024-11-20 09:14:27.920259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.577 [2024-11-20 09:14:27.920289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.577 [2024-11-20 09:14:27.920298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.577 [2024-11-20 09:14:27.920466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.577 [2024-11-20 09:14:27.920620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.577 [2024-11-20 09:14:27.920626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.577 [2024-11-20 09:14:27.920631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.577 [2024-11-20 09:14:27.920637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.577 [2024-11-20 09:14:27.932414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.577 [2024-11-20 09:14:27.932992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.577 [2024-11-20 09:14:27.933023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.577 [2024-11-20 09:14:27.933034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.577 [2024-11-20 09:14:27.933208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.577 [2024-11-20 09:14:27.933363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.577 [2024-11-20 09:14:27.933369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.577 [2024-11-20 09:14:27.933375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.577 [2024-11-20 09:14:27.933380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.577 [2024-11-20 09:14:27.945114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.577 [2024-11-20 09:14:27.945616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.577 [2024-11-20 09:14:27.945647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.577 [2024-11-20 09:14:27.945655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.577 [2024-11-20 09:14:27.945822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.577 [2024-11-20 09:14:27.945975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.577 [2024-11-20 09:14:27.945981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.577 [2024-11-20 09:14:27.945987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.577 [2024-11-20 09:14:27.945992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.577 [2024-11-20 09:14:27.957742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.577 [2024-11-20 09:14:27.958272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.577 [2024-11-20 09:14:27.958303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.577 [2024-11-20 09:14:27.958312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.577 [2024-11-20 09:14:27.958480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.577 [2024-11-20 09:14:27.958634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.577 [2024-11-20 09:14:27.958640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.577 [2024-11-20 09:14:27.958646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.577 [2024-11-20 09:14:27.958651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.577 [2024-11-20 09:14:27.970397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.577 [2024-11-20 09:14:27.970886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.577 [2024-11-20 09:14:27.970900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.577 [2024-11-20 09:14:27.970905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.577 [2024-11-20 09:14:27.971056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.577 [2024-11-20 09:14:27.971217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.577 [2024-11-20 09:14:27.971223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.577 [2024-11-20 09:14:27.971228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.577 [2024-11-20 09:14:27.971233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.577 [2024-11-20 09:14:27.983112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.577 [2024-11-20 09:14:27.983565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.577 [2024-11-20 09:14:27.983578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.577 [2024-11-20 09:14:27.983584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.577 [2024-11-20 09:14:27.983734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.577 [2024-11-20 09:14:27.983884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.577 [2024-11-20 09:14:27.983890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.577 [2024-11-20 09:14:27.983895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.577 [2024-11-20 09:14:27.983899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.577 [2024-11-20 09:14:27.995805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.577 [2024-11-20 09:14:27.996379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.577 [2024-11-20 09:14:27.996409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.577 [2024-11-20 09:14:27.996418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.577 [2024-11-20 09:14:27.996584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.577 [2024-11-20 09:14:27.996737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.577 [2024-11-20 09:14:27.996743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.577 [2024-11-20 09:14:27.996749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.577 [2024-11-20 09:14:27.996754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.577 [2024-11-20 09:14:28.008492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.577 [2024-11-20 09:14:28.009067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.577 [2024-11-20 09:14:28.009097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.577 [2024-11-20 09:14:28.009106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.577 [2024-11-20 09:14:28.009279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.577 [2024-11-20 09:14:28.009433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.577 [2024-11-20 09:14:28.009439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.577 [2024-11-20 09:14:28.009448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.577 [2024-11-20 09:14:28.009454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.577 [2024-11-20 09:14:28.021196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.578 [2024-11-20 09:14:28.021764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.578 [2024-11-20 09:14:28.021794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.578 [2024-11-20 09:14:28.021803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.578 [2024-11-20 09:14:28.021969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.578 [2024-11-20 09:14:28.022122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.578 [2024-11-20 09:14:28.022128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.578 [2024-11-20 09:14:28.022134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.578 [2024-11-20 09:14:28.022139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.578 [2024-11-20 09:14:28.033882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.578 [2024-11-20 09:14:28.034477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.578 [2024-11-20 09:14:28.034508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.578 [2024-11-20 09:14:28.034516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.578 [2024-11-20 09:14:28.034683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.578 [2024-11-20 09:14:28.034836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.578 [2024-11-20 09:14:28.034842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.578 [2024-11-20 09:14:28.034848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.578 [2024-11-20 09:14:28.034853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.578 [2024-11-20 09:14:28.046596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.578 [2024-11-20 09:14:28.047170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.578 [2024-11-20 09:14:28.047200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.578 [2024-11-20 09:14:28.047208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.578 [2024-11-20 09:14:28.047377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.578 [2024-11-20 09:14:28.047530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.578 [2024-11-20 09:14:28.047537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.578 [2024-11-20 09:14:28.047542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.578 [2024-11-20 09:14:28.047548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.578 [2024-11-20 09:14:28.059288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.578 [2024-11-20 09:14:28.059892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.578 [2024-11-20 09:14:28.059922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.578 [2024-11-20 09:14:28.059930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.578 [2024-11-20 09:14:28.060097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.578 [2024-11-20 09:14:28.060259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.578 [2024-11-20 09:14:28.060267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.578 [2024-11-20 09:14:28.060272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.578 [2024-11-20 09:14:28.060278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.578 [2024-11-20 09:14:28.072009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.578 [2024-11-20 09:14:28.072530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.578 [2024-11-20 09:14:28.072560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.578 [2024-11-20 09:14:28.072569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.578 [2024-11-20 09:14:28.072735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.578 [2024-11-20 09:14:28.072888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.578 [2024-11-20 09:14:28.072894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.578 [2024-11-20 09:14:28.072900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.578 [2024-11-20 09:14:28.072905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.578 [2024-11-20 09:14:28.084657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.578 [2024-11-20 09:14:28.085235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.578 [2024-11-20 09:14:28.085265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.578 [2024-11-20 09:14:28.085274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.578 [2024-11-20 09:14:28.085440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.578 [2024-11-20 09:14:28.085594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.578 [2024-11-20 09:14:28.085600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.578 [2024-11-20 09:14:28.085605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.578 [2024-11-20 09:14:28.085611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.578 [2024-11-20 09:14:28.097356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.578 [2024-11-20 09:14:28.097935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.578 [2024-11-20 09:14:28.097965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.578 [2024-11-20 09:14:28.097977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.578 [2024-11-20 09:14:28.098143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.578 [2024-11-20 09:14:28.098304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.578 [2024-11-20 09:14:28.098312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.578 [2024-11-20 09:14:28.098317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.578 [2024-11-20 09:14:28.098323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.839 [2024-11-20 09:14:28.110057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.839 [2024-11-20 09:14:28.110514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.839 [2024-11-20 09:14:28.110544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:02.839 [2024-11-20 09:14:28.110553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:02.839 [2024-11-20 09:14:28.110719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:02.839 [2024-11-20 09:14:28.110873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.839 [2024-11-20 09:14:28.110879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.839 [2024-11-20 09:14:28.110885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.839 [2024-11-20 09:14:28.110890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.839 [2024-11-20 09:14:28.122784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.839 [2024-11-20 09:14:28.123267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.839 [2024-11-20 09:14:28.123297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.839 [2024-11-20 09:14:28.123305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.839 [2024-11-20 09:14:28.123474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.839 [2024-11-20 09:14:28.123627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.839 [2024-11-20 09:14:28.123633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.839 [2024-11-20 09:14:28.123638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.839 [2024-11-20 09:14:28.123644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.839 [2024-11-20 09:14:28.135532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.839 [2024-11-20 09:14:28.136101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.839 [2024-11-20 09:14:28.136131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.839 [2024-11-20 09:14:28.136140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.839 [2024-11-20 09:14:28.136316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.839 [2024-11-20 09:14:28.136475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.839 [2024-11-20 09:14:28.136481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.839 [2024-11-20 09:14:28.136487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.839 [2024-11-20 09:14:28.136493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.839 [2024-11-20 09:14:28.148230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.839 [2024-11-20 09:14:28.148731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.840 [2024-11-20 09:14:28.148761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.840 [2024-11-20 09:14:28.148769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.840 [2024-11-20 09:14:28.148935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.840 [2024-11-20 09:14:28.149089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.840 [2024-11-20 09:14:28.149095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.840 [2024-11-20 09:14:28.149100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.840 [2024-11-20 09:14:28.149106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.840 [2024-11-20 09:14:28.160846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.840 [2024-11-20 09:14:28.161475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.840 [2024-11-20 09:14:28.161506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.840 [2024-11-20 09:14:28.161514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.840 [2024-11-20 09:14:28.161680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.840 [2024-11-20 09:14:28.161834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.840 [2024-11-20 09:14:28.161840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.840 [2024-11-20 09:14:28.161845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.840 [2024-11-20 09:14:28.161851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.840 [2024-11-20 09:14:28.173599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.840 [2024-11-20 09:14:28.174151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.840 [2024-11-20 09:14:28.174186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.840 [2024-11-20 09:14:28.174194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.840 [2024-11-20 09:14:28.174360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.840 [2024-11-20 09:14:28.174513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.840 [2024-11-20 09:14:28.174519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.840 [2024-11-20 09:14:28.174525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.840 [2024-11-20 09:14:28.174534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.840 [2024-11-20 09:14:28.186274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.840 [2024-11-20 09:14:28.186813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.840 [2024-11-20 09:14:28.186844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.840 [2024-11-20 09:14:28.186853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.840 [2024-11-20 09:14:28.187019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.840 [2024-11-20 09:14:28.187181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.840 [2024-11-20 09:14:28.187189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.840 [2024-11-20 09:14:28.187194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.840 [2024-11-20 09:14:28.187200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.840 [2024-11-20 09:14:28.198934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.840 [2024-11-20 09:14:28.199554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.840 [2024-11-20 09:14:28.199584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.840 [2024-11-20 09:14:28.199593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.840 [2024-11-20 09:14:28.199759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.840 [2024-11-20 09:14:28.199913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.840 [2024-11-20 09:14:28.199919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.840 [2024-11-20 09:14:28.199924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.840 [2024-11-20 09:14:28.199930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.840 [2024-11-20 09:14:28.211675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.840 [2024-11-20 09:14:28.212168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.840 [2024-11-20 09:14:28.212184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.840 [2024-11-20 09:14:28.212190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.840 [2024-11-20 09:14:28.212340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.840 [2024-11-20 09:14:28.212491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.840 [2024-11-20 09:14:28.212496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.840 [2024-11-20 09:14:28.212501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.840 [2024-11-20 09:14:28.212506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.840 [2024-11-20 09:14:28.224384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.840 [2024-11-20 09:14:28.224955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.840 [2024-11-20 09:14:28.224986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.840 [2024-11-20 09:14:28.224994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.840 [2024-11-20 09:14:28.225168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.840 [2024-11-20 09:14:28.225322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.840 [2024-11-20 09:14:28.225328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.840 [2024-11-20 09:14:28.225334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.840 [2024-11-20 09:14:28.225339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.840 [2024-11-20 09:14:28.237071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.840 [2024-11-20 09:14:28.237631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.840 [2024-11-20 09:14:28.237661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.840 [2024-11-20 09:14:28.237669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.840 [2024-11-20 09:14:28.237836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.841 [2024-11-20 09:14:28.237989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.841 [2024-11-20 09:14:28.237995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.841 [2024-11-20 09:14:28.238001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.841 [2024-11-20 09:14:28.238006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.841 [2024-11-20 09:14:28.249745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.841 [2024-11-20 09:14:28.250245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.841 [2024-11-20 09:14:28.250275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.841 [2024-11-20 09:14:28.250284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.841 [2024-11-20 09:14:28.250453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.841 [2024-11-20 09:14:28.250606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.841 [2024-11-20 09:14:28.250612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.841 [2024-11-20 09:14:28.250617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.841 [2024-11-20 09:14:28.250623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.841 [2024-11-20 09:14:28.262364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.841 [2024-11-20 09:14:28.262936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.841 [2024-11-20 09:14:28.262966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.841 [2024-11-20 09:14:28.262979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.841 [2024-11-20 09:14:28.263145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.841 [2024-11-20 09:14:28.263306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.841 [2024-11-20 09:14:28.263313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.841 [2024-11-20 09:14:28.263318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.841 [2024-11-20 09:14:28.263324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.841 [2024-11-20 09:14:28.275063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.841 [2024-11-20 09:14:28.275629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.841 [2024-11-20 09:14:28.275660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.841 [2024-11-20 09:14:28.275668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.841 [2024-11-20 09:14:28.275835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.841 [2024-11-20 09:14:28.275988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.841 [2024-11-20 09:14:28.275994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.841 [2024-11-20 09:14:28.275999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.841 [2024-11-20 09:14:28.276005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.841 [2024-11-20 09:14:28.287746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.841 [2024-11-20 09:14:28.288239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.841 [2024-11-20 09:14:28.288269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.841 [2024-11-20 09:14:28.288278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.841 [2024-11-20 09:14:28.288447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.841 [2024-11-20 09:14:28.288600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.841 [2024-11-20 09:14:28.288606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.841 [2024-11-20 09:14:28.288612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.841 [2024-11-20 09:14:28.288617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.841 [2024-11-20 09:14:28.300357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.841 [2024-11-20 09:14:28.300932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.841 [2024-11-20 09:14:28.300962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.841 [2024-11-20 09:14:28.300971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.841 [2024-11-20 09:14:28.301137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.841 [2024-11-20 09:14:28.301301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.841 [2024-11-20 09:14:28.301308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.841 [2024-11-20 09:14:28.301314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.841 [2024-11-20 09:14:28.301320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.841 [2024-11-20 09:14:28.313055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.841 [2024-11-20 09:14:28.313544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.841 [2024-11-20 09:14:28.313559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.841 [2024-11-20 09:14:28.313565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.841 [2024-11-20 09:14:28.313715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.841 [2024-11-20 09:14:28.313866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.841 [2024-11-20 09:14:28.313871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.841 [2024-11-20 09:14:28.313876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.841 [2024-11-20 09:14:28.313881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.842 [2024-11-20 09:14:28.325764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.842 [2024-11-20 09:14:28.326208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.842 [2024-11-20 09:14:28.326221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.842 [2024-11-20 09:14:28.326227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.842 [2024-11-20 09:14:28.326377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.842 [2024-11-20 09:14:28.326527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.842 [2024-11-20 09:14:28.326533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.842 [2024-11-20 09:14:28.326538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.842 [2024-11-20 09:14:28.326543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.842 [2024-11-20 09:14:28.338415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.842 [2024-11-20 09:14:28.338944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.842 [2024-11-20 09:14:28.338974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.842 [2024-11-20 09:14:28.338983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.842 [2024-11-20 09:14:28.339149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.842 [2024-11-20 09:14:28.339309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.842 [2024-11-20 09:14:28.339316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.842 [2024-11-20 09:14:28.339322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.842 [2024-11-20 09:14:28.339331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.842 [2024-11-20 09:14:28.351061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:02.842 [2024-11-20 09:14:28.351648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.842 [2024-11-20 09:14:28.351678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:02.842 [2024-11-20 09:14:28.351687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:02.842 [2024-11-20 09:14:28.351853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:02.842 [2024-11-20 09:14:28.352006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:02.842 [2024-11-20 09:14:28.352012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:02.842 [2024-11-20 09:14:28.352018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:02.842 [2024-11-20 09:14:28.352023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:02.842 [2024-11-20 09:14:28.363768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.106 [2024-11-20 09:14:28.364253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.106 [2024-11-20 09:14:28.364284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.106 [2024-11-20 09:14:28.364293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.106 [2024-11-20 09:14:28.364459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.106 [2024-11-20 09:14:28.364613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.106 [2024-11-20 09:14:28.364619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.106 [2024-11-20 09:14:28.364625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.106 [2024-11-20 09:14:28.364632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.106 [2024-11-20 09:14:28.376388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.106 [2024-11-20 09:14:28.376968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.106 [2024-11-20 09:14:28.376998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.106 [2024-11-20 09:14:28.377007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.106 [2024-11-20 09:14:28.377179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.106 [2024-11-20 09:14:28.377333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.106 [2024-11-20 09:14:28.377339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.106 [2024-11-20 09:14:28.377345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.106 [2024-11-20 09:14:28.377351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.106 [2024-11-20 09:14:28.389094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.106 [2024-11-20 09:14:28.389656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.106 [2024-11-20 09:14:28.389687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.106 [2024-11-20 09:14:28.389696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.106 [2024-11-20 09:14:28.389861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.106 [2024-11-20 09:14:28.390016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.106 [2024-11-20 09:14:28.390022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.106 [2024-11-20 09:14:28.390027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.106 [2024-11-20 09:14:28.390033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.106 [2024-11-20 09:14:28.401781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.106 [2024-11-20 09:14:28.402289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.106 [2024-11-20 09:14:28.402319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.106 [2024-11-20 09:14:28.402328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.106 [2024-11-20 09:14:28.402496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.106 [2024-11-20 09:14:28.402650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.106 [2024-11-20 09:14:28.402656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.106 [2024-11-20 09:14:28.402661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.106 [2024-11-20 09:14:28.402667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.106 [2024-11-20 09:14:28.414418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.106 [2024-11-20 09:14:28.414987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.106 [2024-11-20 09:14:28.415017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.106 [2024-11-20 09:14:28.415026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.106 [2024-11-20 09:14:28.415198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.106 [2024-11-20 09:14:28.415352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.106 [2024-11-20 09:14:28.415358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.106 [2024-11-20 09:14:28.415364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.106 [2024-11-20 09:14:28.415369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.106 [2024-11-20 09:14:28.427125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.106 [2024-11-20 09:14:28.427701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.106 [2024-11-20 09:14:28.427732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.106 [2024-11-20 09:14:28.427748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.106 [2024-11-20 09:14:28.427914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.106 [2024-11-20 09:14:28.428069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.106 [2024-11-20 09:14:28.428077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.106 [2024-11-20 09:14:28.428083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.106 [2024-11-20 09:14:28.428090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.106 [2024-11-20 09:14:28.439839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.106 [2024-11-20 09:14:28.440339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.106 [2024-11-20 09:14:28.440355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.107 [2024-11-20 09:14:28.440361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.107 [2024-11-20 09:14:28.440512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.107 [2024-11-20 09:14:28.440662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.107 [2024-11-20 09:14:28.440668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.107 [2024-11-20 09:14:28.440673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.107 [2024-11-20 09:14:28.440678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.107 [2024-11-20 09:14:28.452555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.107 [2024-11-20 09:14:28.452990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.107 [2024-11-20 09:14:28.453002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.107 [2024-11-20 09:14:28.453008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.107 [2024-11-20 09:14:28.453163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.107 [2024-11-20 09:14:28.453315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.107 [2024-11-20 09:14:28.453321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.107 [2024-11-20 09:14:28.453326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.107 [2024-11-20 09:14:28.453331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.107 [2024-11-20 09:14:28.465212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.107 [2024-11-20 09:14:28.465680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.107 [2024-11-20 09:14:28.465692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.107 [2024-11-20 09:14:28.465698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.107 [2024-11-20 09:14:28.465848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.107 [2024-11-20 09:14:28.466001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.107 [2024-11-20 09:14:28.466007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.107 [2024-11-20 09:14:28.466013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.107 [2024-11-20 09:14:28.466018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.107 [2024-11-20 09:14:28.477907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.107 [2024-11-20 09:14:28.478489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.107 [2024-11-20 09:14:28.478519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.107 [2024-11-20 09:14:28.478528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.107 [2024-11-20 09:14:28.478694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.107 [2024-11-20 09:14:28.478847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.107 [2024-11-20 09:14:28.478853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.107 [2024-11-20 09:14:28.478859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.107 [2024-11-20 09:14:28.478864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.107 [2024-11-20 09:14:28.490609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.107 [2024-11-20 09:14:28.491104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.107 [2024-11-20 09:14:28.491119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.107 [2024-11-20 09:14:28.491125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.107 [2024-11-20 09:14:28.491345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.107 [2024-11-20 09:14:28.491497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.107 [2024-11-20 09:14:28.491503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.107 [2024-11-20 09:14:28.491508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.107 [2024-11-20 09:14:28.491513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.107 [2024-11-20 09:14:28.503241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.107 [2024-11-20 09:14:28.503806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.107 [2024-11-20 09:14:28.503836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.107 [2024-11-20 09:14:28.503844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.107 [2024-11-20 09:14:28.504011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.107 [2024-11-20 09:14:28.504171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.107 [2024-11-20 09:14:28.504178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.107 [2024-11-20 09:14:28.504183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.107 [2024-11-20 09:14:28.504193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.107 [2024-11-20 09:14:28.515933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.107 [2024-11-20 09:14:28.516532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.107 [2024-11-20 09:14:28.516563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.107 [2024-11-20 09:14:28.516571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.107 [2024-11-20 09:14:28.516737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.107 [2024-11-20 09:14:28.516891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.107 [2024-11-20 09:14:28.516897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.107 [2024-11-20 09:14:28.516902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.107 [2024-11-20 09:14:28.516908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.107 [2024-11-20 09:14:28.528657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.107 [2024-11-20 09:14:28.529135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.107 [2024-11-20 09:14:28.529150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.107 [2024-11-20 09:14:28.529155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.107 [2024-11-20 09:14:28.529312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.107 [2024-11-20 09:14:28.529462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.107 [2024-11-20 09:14:28.529468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.107 [2024-11-20 09:14:28.529473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.107 [2024-11-20 09:14:28.529478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.107 [2024-11-20 09:14:28.541345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.107 [2024-11-20 09:14:28.541840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.107 [2024-11-20 09:14:28.541852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.107 [2024-11-20 09:14:28.541857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.107 [2024-11-20 09:14:28.542007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.107 [2024-11-20 09:14:28.542162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.107 [2024-11-20 09:14:28.542168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.107 [2024-11-20 09:14:28.542173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.107 [2024-11-20 09:14:28.542178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.108 [2024-11-20 09:14:28.554047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.108 [2024-11-20 09:14:28.554582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.108 [2024-11-20 09:14:28.554595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.108 [2024-11-20 09:14:28.554600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.108 [2024-11-20 09:14:28.554750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.108 [2024-11-20 09:14:28.554900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.108 [2024-11-20 09:14:28.554906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.108 [2024-11-20 09:14:28.554911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.108 [2024-11-20 09:14:28.554915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.108 [2024-11-20 09:14:28.566789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.108 [2024-11-20 09:14:28.567241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.108 [2024-11-20 09:14:28.567253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.108 [2024-11-20 09:14:28.567258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.108 [2024-11-20 09:14:28.567409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.108 [2024-11-20 09:14:28.567559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.108 [2024-11-20 09:14:28.567564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.108 [2024-11-20 09:14:28.567569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.108 [2024-11-20 09:14:28.567574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.108 [2024-11-20 09:14:28.579472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.108 [2024-11-20 09:14:28.580020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.108 [2024-11-20 09:14:28.580050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.108 [2024-11-20 09:14:28.580059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.108 [2024-11-20 09:14:28.580232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.108 [2024-11-20 09:14:28.580387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.108 [2024-11-20 09:14:28.580393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.108 [2024-11-20 09:14:28.580398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.108 [2024-11-20 09:14:28.580404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.108 [2024-11-20 09:14:28.592136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.108 [2024-11-20 09:14:28.592751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.108 [2024-11-20 09:14:28.592781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.108 [2024-11-20 09:14:28.592793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.108 [2024-11-20 09:14:28.592959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.108 [2024-11-20 09:14:28.593112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.108 [2024-11-20 09:14:28.593118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.108 [2024-11-20 09:14:28.593124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.108 [2024-11-20 09:14:28.593130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.108 [2024-11-20 09:14:28.604871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.108 [2024-11-20 09:14:28.605438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.108 [2024-11-20 09:14:28.605467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.108 [2024-11-20 09:14:28.605476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.108 [2024-11-20 09:14:28.605642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.108 [2024-11-20 09:14:28.605795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.108 [2024-11-20 09:14:28.605801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.108 [2024-11-20 09:14:28.605807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.108 [2024-11-20 09:14:28.605813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.108 [2024-11-20 09:14:28.617553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.108 [2024-11-20 09:14:28.618126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.108 [2024-11-20 09:14:28.618156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.108 [2024-11-20 09:14:28.618172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.108 [2024-11-20 09:14:28.618338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.108 [2024-11-20 09:14:28.618492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.108 [2024-11-20 09:14:28.618498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.108 [2024-11-20 09:14:28.618503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.108 [2024-11-20 09:14:28.618508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.371 [2024-11-20 09:14:28.630269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.371 [2024-11-20 09:14:28.630873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.371 [2024-11-20 09:14:28.630903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.371 [2024-11-20 09:14:28.630912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.371 [2024-11-20 09:14:28.631078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.371 [2024-11-20 09:14:28.631244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.371 [2024-11-20 09:14:28.631251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.371 [2024-11-20 09:14:28.631256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.371 [2024-11-20 09:14:28.631262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.371 [2024-11-20 09:14:28.643004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.371 [2024-11-20 09:14:28.643557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.371 [2024-11-20 09:14:28.643587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.371 [2024-11-20 09:14:28.643596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.371 [2024-11-20 09:14:28.643762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.371 [2024-11-20 09:14:28.643915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.371 [2024-11-20 09:14:28.643921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.371 [2024-11-20 09:14:28.643927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.371 [2024-11-20 09:14:28.643932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.371 [2024-11-20 09:14:28.655676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.371 [2024-11-20 09:14:28.656167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.371 [2024-11-20 09:14:28.656182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.371 [2024-11-20 09:14:28.656187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.371 [2024-11-20 09:14:28.656338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.371 [2024-11-20 09:14:28.656489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.371 [2024-11-20 09:14:28.656494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.371 [2024-11-20 09:14:28.656499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.371 [2024-11-20 09:14:28.656504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.371 [2024-11-20 09:14:28.668375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.371 [2024-11-20 09:14:28.668988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.371 [2024-11-20 09:14:28.669019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.371 [2024-11-20 09:14:28.669028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.371 [2024-11-20 09:14:28.669202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.371 [2024-11-20 09:14:28.669356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.371 [2024-11-20 09:14:28.669362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.371 [2024-11-20 09:14:28.669368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.371 [2024-11-20 09:14:28.669377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.371 [2024-11-20 09:14:28.681120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.371 [2024-11-20 09:14:28.681620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.371 [2024-11-20 09:14:28.681635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.371 [2024-11-20 09:14:28.681640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.371 [2024-11-20 09:14:28.681792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.371 [2024-11-20 09:14:28.681942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.371 [2024-11-20 09:14:28.681948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.371 [2024-11-20 09:14:28.681953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.371 [2024-11-20 09:14:28.681957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.371 [2024-11-20 09:14:28.693836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.371 [2024-11-20 09:14:28.694384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.371 [2024-11-20 09:14:28.694415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.371 [2024-11-20 09:14:28.694423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.371 [2024-11-20 09:14:28.694589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.371 [2024-11-20 09:14:28.694743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.371 [2024-11-20 09:14:28.694749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.371 [2024-11-20 09:14:28.694754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.371 [2024-11-20 09:14:28.694760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.371 [2024-11-20 09:14:28.706574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.371 [2024-11-20 09:14:28.707182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.371 [2024-11-20 09:14:28.707212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.371 [2024-11-20 09:14:28.707221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.371 [2024-11-20 09:14:28.707389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.371 [2024-11-20 09:14:28.707543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.371 [2024-11-20 09:14:28.707549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.371 [2024-11-20 09:14:28.707554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.372 [2024-11-20 09:14:28.707560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.372 [2024-11-20 09:14:28.719296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.372 [2024-11-20 09:14:28.719743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.372 [2024-11-20 09:14:28.719757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.372 [2024-11-20 09:14:28.719763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.372 [2024-11-20 09:14:28.719914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.372 [2024-11-20 09:14:28.720072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.372 [2024-11-20 09:14:28.720079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.372 [2024-11-20 09:14:28.720084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.372 [2024-11-20 09:14:28.720090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.372 [2024-11-20 09:14:28.731968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.372 [2024-11-20 09:14:28.732524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.372 [2024-11-20 09:14:28.732554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.372 [2024-11-20 09:14:28.732563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.372 [2024-11-20 09:14:28.732732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.372 [2024-11-20 09:14:28.732886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.372 [2024-11-20 09:14:28.732892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.372 [2024-11-20 09:14:28.732898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.372 [2024-11-20 09:14:28.732904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.372 [2024-11-20 09:14:28.744649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.372 [2024-11-20 09:14:28.745163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.372 [2024-11-20 09:14:28.745194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.372 [2024-11-20 09:14:28.745202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.372 [2024-11-20 09:14:28.745371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.372 [2024-11-20 09:14:28.745524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.372 [2024-11-20 09:14:28.745530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.372 [2024-11-20 09:14:28.745536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.372 [2024-11-20 09:14:28.745541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.372 [2024-11-20 09:14:28.757283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.372 [2024-11-20 09:14:28.757791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.372 [2024-11-20 09:14:28.757822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.372 [2024-11-20 09:14:28.757833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.372 [2024-11-20 09:14:28.758000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.372 [2024-11-20 09:14:28.758153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.372 [2024-11-20 09:14:28.758167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.372 [2024-11-20 09:14:28.758173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.372 [2024-11-20 09:14:28.758179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.372 [2024-11-20 09:14:28.769925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.372 [2024-11-20 09:14:28.770498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.372 [2024-11-20 09:14:28.770528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.372 [2024-11-20 09:14:28.770537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.372 [2024-11-20 09:14:28.770703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.372 [2024-11-20 09:14:28.770856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.372 [2024-11-20 09:14:28.770863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.372 [2024-11-20 09:14:28.770868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.372 [2024-11-20 09:14:28.770873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.372 [2024-11-20 09:14:28.782621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.372 [2024-11-20 09:14:28.783194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.372 [2024-11-20 09:14:28.783224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.372 [2024-11-20 09:14:28.783233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.372 [2024-11-20 09:14:28.783401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.372 [2024-11-20 09:14:28.783554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.372 [2024-11-20 09:14:28.783561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.372 [2024-11-20 09:14:28.783566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.372 [2024-11-20 09:14:28.783572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.372 [2024-11-20 09:14:28.795319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.372 [2024-11-20 09:14:28.795891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.372 [2024-11-20 09:14:28.795921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.372 [2024-11-20 09:14:28.795930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.372 [2024-11-20 09:14:28.796096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.372 [2024-11-20 09:14:28.796259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.372 [2024-11-20 09:14:28.796266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.372 [2024-11-20 09:14:28.796271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.372 [2024-11-20 09:14:28.796277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.372 [2024-11-20 09:14:28.808011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.372 [2024-11-20 09:14:28.808561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.372 [2024-11-20 09:14:28.808591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.372 [2024-11-20 09:14:28.808600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.372 [2024-11-20 09:14:28.808766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.372 [2024-11-20 09:14:28.808919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.372 [2024-11-20 09:14:28.808925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.372 [2024-11-20 09:14:28.808930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.373 [2024-11-20 09:14:28.808936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.373 [2024-11-20 09:14:28.820688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.373 [2024-11-20 09:14:28.821311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.373 [2024-11-20 09:14:28.821342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.373 [2024-11-20 09:14:28.821350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.373 [2024-11-20 09:14:28.821518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.373 [2024-11-20 09:14:28.821672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.373 [2024-11-20 09:14:28.821678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.373 [2024-11-20 09:14:28.821684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.373 [2024-11-20 09:14:28.821690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.373 [2024-11-20 09:14:28.833435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.373 [2024-11-20 09:14:28.833984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.373 [2024-11-20 09:14:28.834014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.373 [2024-11-20 09:14:28.834022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.373 [2024-11-20 09:14:28.834196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.373 [2024-11-20 09:14:28.834350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.373 [2024-11-20 09:14:28.834356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.373 [2024-11-20 09:14:28.834362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.373 [2024-11-20 09:14:28.834371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.373 [2024-11-20 09:14:28.846104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.373 [2024-11-20 09:14:28.846676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.373 [2024-11-20 09:14:28.846707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.373 [2024-11-20 09:14:28.846715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.373 [2024-11-20 09:14:28.846882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.373 [2024-11-20 09:14:28.847035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.373 [2024-11-20 09:14:28.847041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.373 [2024-11-20 09:14:28.847046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.373 [2024-11-20 09:14:28.847052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.373 [2024-11-20 09:14:28.858795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.373 [2024-11-20 09:14:28.859375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.373 [2024-11-20 09:14:28.859405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.373 [2024-11-20 09:14:28.859414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.373 [2024-11-20 09:14:28.859580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.373 [2024-11-20 09:14:28.859733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.373 [2024-11-20 09:14:28.859739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.373 [2024-11-20 09:14:28.859745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.373 [2024-11-20 09:14:28.859750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.373 [2024-11-20 09:14:28.871492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.373 [2024-11-20 09:14:28.871933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.373 [2024-11-20 09:14:28.871947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.373 [2024-11-20 09:14:28.871953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.373 [2024-11-20 09:14:28.872104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.373 [2024-11-20 09:14:28.872261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.373 [2024-11-20 09:14:28.872267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.373 [2024-11-20 09:14:28.872272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.373 [2024-11-20 09:14:28.872277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.373 [2024-11-20 09:14:28.884175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.373 [2024-11-20 09:14:28.884636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.373 [2024-11-20 09:14:28.884650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.373 [2024-11-20 09:14:28.884655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.373 [2024-11-20 09:14:28.884806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.373 [2024-11-20 09:14:28.884957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.373 [2024-11-20 09:14:28.884963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.373 [2024-11-20 09:14:28.884968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.373 [2024-11-20 09:14:28.884972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.634 [2024-11-20 09:14:28.896864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.634 [2024-11-20 09:14:28.897318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.634 [2024-11-20 09:14:28.897332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.634 [2024-11-20 09:14:28.897337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.634 [2024-11-20 09:14:28.897488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.634 [2024-11-20 09:14:28.897638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.634 [2024-11-20 09:14:28.897645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.634 [2024-11-20 09:14:28.897650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.634 [2024-11-20 09:14:28.897655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.634 [2024-11-20 09:14:28.909549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.634 [2024-11-20 09:14:28.910034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.634 [2024-11-20 09:14:28.910047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.634 [2024-11-20 09:14:28.910052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.634 [2024-11-20 09:14:28.910206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.634 [2024-11-20 09:14:28.910361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.634 [2024-11-20 09:14:28.910367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.634 [2024-11-20 09:14:28.910372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.634 [2024-11-20 09:14:28.910377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.634 5615.00 IOPS, 21.93 MiB/s [2024-11-20T08:14:29.164Z] [2024-11-20 09:14:28.922280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.635 [2024-11-20 09:14:28.922577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.635 [2024-11-20 09:14:28.922591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.635 [2024-11-20 09:14:28.922600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.635 [2024-11-20 09:14:28.922751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.635 [2024-11-20 09:14:28.922901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.635 [2024-11-20 09:14:28.922907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.635 [2024-11-20 09:14:28.922912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.635 [2024-11-20 09:14:28.922917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.635 [2024-11-20 09:14:28.934949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.635 [2024-11-20 09:14:28.935390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.635 [2024-11-20 09:14:28.935403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.635 [2024-11-20 09:14:28.935408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.635 [2024-11-20 09:14:28.935558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.635 [2024-11-20 09:14:28.935709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.635 [2024-11-20 09:14:28.935714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.635 [2024-11-20 09:14:28.935719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.635 [2024-11-20 09:14:28.935725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.635 [2024-11-20 09:14:28.947614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.635 [2024-11-20 09:14:28.948063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.635 [2024-11-20 09:14:28.948075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.635 [2024-11-20 09:14:28.948080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.635 [2024-11-20 09:14:28.948234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.635 [2024-11-20 09:14:28.948385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.635 [2024-11-20 09:14:28.948391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.635 [2024-11-20 09:14:28.948396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.635 [2024-11-20 09:14:28.948400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.635 [2024-11-20 09:14:28.960288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.635 [2024-11-20 09:14:28.960736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.635 [2024-11-20 09:14:28.960748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.635 [2024-11-20 09:14:28.960753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.635 [2024-11-20 09:14:28.960903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.635 [2024-11-20 09:14:28.961057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.635 [2024-11-20 09:14:28.961063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.635 [2024-11-20 09:14:28.961069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.635 [2024-11-20 09:14:28.961074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.635 [2024-11-20 09:14:28.972965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.635 [2024-11-20 09:14:28.973550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.635 [2024-11-20 09:14:28.973581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.635 [2024-11-20 09:14:28.973590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.635 [2024-11-20 09:14:28.973759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.635 [2024-11-20 09:14:28.973912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.635 [2024-11-20 09:14:28.973919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.635 [2024-11-20 09:14:28.973924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.635 [2024-11-20 09:14:28.973930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.635 [2024-11-20 09:14:28.985708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.635 [2024-11-20 09:14:28.986178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.635 [2024-11-20 09:14:28.986194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.635 [2024-11-20 09:14:28.986199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.635 [2024-11-20 09:14:28.986350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.635 [2024-11-20 09:14:28.986501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.635 [2024-11-20 09:14:28.986506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.635 [2024-11-20 09:14:28.986512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.635 [2024-11-20 09:14:28.986516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.635 [2024-11-20 09:14:28.998402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.635 [2024-11-20 09:14:28.998939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.635 [2024-11-20 09:14:28.998969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.635 [2024-11-20 09:14:28.998978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.635 [2024-11-20 09:14:28.999143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.635 [2024-11-20 09:14:28.999304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.635 [2024-11-20 09:14:28.999311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.635 [2024-11-20 09:14:28.999320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.635 [2024-11-20 09:14:28.999326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.635 [2024-11-20 09:14:29.011080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.635 [2024-11-20 09:14:29.011569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.635 [2024-11-20 09:14:29.011585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.635 [2024-11-20 09:14:29.011590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.635 [2024-11-20 09:14:29.011741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.635 [2024-11-20 09:14:29.011891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.635 [2024-11-20 09:14:29.011897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.635 [2024-11-20 09:14:29.011902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.635 [2024-11-20 09:14:29.011907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.635 [2024-11-20 09:14:29.023810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.635 [2024-11-20 09:14:29.024301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.635 [2024-11-20 09:14:29.024315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.635 [2024-11-20 09:14:29.024321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.635 [2024-11-20 09:14:29.024471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.635 [2024-11-20 09:14:29.024621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.635 [2024-11-20 09:14:29.024627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.635 [2024-11-20 09:14:29.024632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.635 [2024-11-20 09:14:29.024637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.635 [2024-11-20 09:14:29.036527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.636 [2024-11-20 09:14:29.037015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.636 [2024-11-20 09:14:29.037027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.636 [2024-11-20 09:14:29.037033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.636 [2024-11-20 09:14:29.037189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.636 [2024-11-20 09:14:29.037341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.636 [2024-11-20 09:14:29.037347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.636 [2024-11-20 09:14:29.037352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.636 [2024-11-20 09:14:29.037357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.636 [2024-11-20 09:14:29.049253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.636 [2024-11-20 09:14:29.049829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.636 [2024-11-20 09:14:29.049860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.636 [2024-11-20 09:14:29.049868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.636 [2024-11-20 09:14:29.050034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.636 [2024-11-20 09:14:29.050197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.636 [2024-11-20 09:14:29.050204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.636 [2024-11-20 09:14:29.050209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.636 [2024-11-20 09:14:29.050215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.636 [2024-11-20 09:14:29.061967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.636 [2024-11-20 09:14:29.062467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.636 [2024-11-20 09:14:29.062483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.636 [2024-11-20 09:14:29.062488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.636 [2024-11-20 09:14:29.062639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.636 [2024-11-20 09:14:29.062790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.636 [2024-11-20 09:14:29.062795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.636 [2024-11-20 09:14:29.062800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.636 [2024-11-20 09:14:29.062805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.636 [2024-11-20 09:14:29.074709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.636 [2024-11-20 09:14:29.075042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.636 [2024-11-20 09:14:29.075057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:03.636 [2024-11-20 09:14:29.075062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:03.636 [2024-11-20 09:14:29.075218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:03.636 [2024-11-20 09:14:29.075369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.636 [2024-11-20 09:14:29.075375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.636 [2024-11-20 09:14:29.075380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.636 [2024-11-20 09:14:29.075385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.636 [2024-11-20 09:14:29.087430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.636 [2024-11-20 09:14:29.087911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-11-20 09:14:29.087924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.636 [2024-11-20 09:14:29.087933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.636 [2024-11-20 09:14:29.088083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.636 [2024-11-20 09:14:29.088238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.636 [2024-11-20 09:14:29.088245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.636 [2024-11-20 09:14:29.088250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.636 [2024-11-20 09:14:29.088255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.636 [2024-11-20 09:14:29.100140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.636 [2024-11-20 09:14:29.100590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-11-20 09:14:29.100603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.636 [2024-11-20 09:14:29.100608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.636 [2024-11-20 09:14:29.100758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.636 [2024-11-20 09:14:29.100909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.636 [2024-11-20 09:14:29.100914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.636 [2024-11-20 09:14:29.100919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.636 [2024-11-20 09:14:29.100924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.636 [2024-11-20 09:14:29.112814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.636 [2024-11-20 09:14:29.113231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-11-20 09:14:29.113244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.636 [2024-11-20 09:14:29.113249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.636 [2024-11-20 09:14:29.113400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.636 [2024-11-20 09:14:29.113550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.636 [2024-11-20 09:14:29.113555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.636 [2024-11-20 09:14:29.113560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.636 [2024-11-20 09:14:29.113565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.636 [2024-11-20 09:14:29.125466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.636 [2024-11-20 09:14:29.125953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-11-20 09:14:29.125966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.636 [2024-11-20 09:14:29.125971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.636 [2024-11-20 09:14:29.126121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.636 [2024-11-20 09:14:29.126279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.636 [2024-11-20 09:14:29.126286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.636 [2024-11-20 09:14:29.126291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.636 [2024-11-20 09:14:29.126296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.636 [2024-11-20 09:14:29.138186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.636 [2024-11-20 09:14:29.138676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-11-20 09:14:29.138688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.636 [2024-11-20 09:14:29.138693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.636 [2024-11-20 09:14:29.138844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.636 [2024-11-20 09:14:29.138994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.636 [2024-11-20 09:14:29.138999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.636 [2024-11-20 09:14:29.139004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.636 [2024-11-20 09:14:29.139009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.636 [2024-11-20 09:14:29.150900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.636 [2024-11-20 09:14:29.151395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-11-20 09:14:29.151408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.636 [2024-11-20 09:14:29.151413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.636 [2024-11-20 09:14:29.151563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.636 [2024-11-20 09:14:29.151713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.636 [2024-11-20 09:14:29.151719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.636 [2024-11-20 09:14:29.151725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.637 [2024-11-20 09:14:29.151730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.163619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.164072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.164084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.164090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.164244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.164395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.164400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.164411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.164415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.176315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.176861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.176891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.176899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.177065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.177227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.177241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.177246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.177252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.189006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.189510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.189526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.189531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.189682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.189833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.189838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.189843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.189849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.201739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.202068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.202083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.202089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.202244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.202395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.202401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.202406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.202410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.214446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.214769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.214783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.214788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.214939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.215090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.215096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.215102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.215106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.227155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.227617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.227629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.227635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.227785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.227936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.227942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.227947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.227951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.239863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.240469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.240501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.240509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.240675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.240829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.240835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.240840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.240846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.252591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.253181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.253211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.253223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.253389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.253542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.253549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.253554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.253559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.265314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.265933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.265964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.265972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.266138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.266300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.266307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.266313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.266319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.277923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.278389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.278404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.278409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.278560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.278710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.278716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.278722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.278726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.290608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.291065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.291078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.291083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.291237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.291392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.291398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.291403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.291408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.303279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.905 [2024-11-20 09:14:29.303847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.905 [2024-11-20 09:14:29.303877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.905 [2024-11-20 09:14:29.303886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.905 [2024-11-20 09:14:29.304053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.905 [2024-11-20 09:14:29.304215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.905 [2024-11-20 09:14:29.304222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.905 [2024-11-20 09:14:29.304228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.905 [2024-11-20 09:14:29.304234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.905 [2024-11-20 09:14:29.315979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.906 [2024-11-20 09:14:29.316579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.906 [2024-11-20 09:14:29.316610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.906 [2024-11-20 09:14:29.316619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.906 [2024-11-20 09:14:29.316786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.906 [2024-11-20 09:14:29.316939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.906 [2024-11-20 09:14:29.316945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.906 [2024-11-20 09:14:29.316951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.906 [2024-11-20 09:14:29.316956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.906 [2024-11-20 09:14:29.328729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.906 [2024-11-20 09:14:29.329278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.906 [2024-11-20 09:14:29.329308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.906 [2024-11-20 09:14:29.329317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.906 [2024-11-20 09:14:29.329486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.906 [2024-11-20 09:14:29.329639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.906 [2024-11-20 09:14:29.329645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.906 [2024-11-20 09:14:29.329655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.906 [2024-11-20 09:14:29.329661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.906 [2024-11-20 09:14:29.341443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.906 [2024-11-20 09:14:29.341942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.906 [2024-11-20 09:14:29.341956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.906 [2024-11-20 09:14:29.341962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.906 [2024-11-20 09:14:29.342113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.906 [2024-11-20 09:14:29.342269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.906 [2024-11-20 09:14:29.342275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.906 [2024-11-20 09:14:29.342280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.906 [2024-11-20 09:14:29.342285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.906 [2024-11-20 09:14:29.354163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.906 [2024-11-20 09:14:29.354635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.906 [2024-11-20 09:14:29.354648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.906 [2024-11-20 09:14:29.354653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.906 [2024-11-20 09:14:29.354803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.906 [2024-11-20 09:14:29.354954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.906 [2024-11-20 09:14:29.354959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.906 [2024-11-20 09:14:29.354964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.906 [2024-11-20 09:14:29.354969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.906 [2024-11-20 09:14:29.366845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.906 [2024-11-20 09:14:29.367217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.906 [2024-11-20 09:14:29.367230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.906 [2024-11-20 09:14:29.367235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.906 [2024-11-20 09:14:29.367386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.906 [2024-11-20 09:14:29.367535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.906 [2024-11-20 09:14:29.367541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.906 [2024-11-20 09:14:29.367546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.906 [2024-11-20 09:14:29.367551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.906 [2024-11-20 09:14:29.379577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.906 [2024-11-20 09:14:29.380067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.906 [2024-11-20 09:14:29.380080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.906 [2024-11-20 09:14:29.380085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.906 [2024-11-20 09:14:29.380238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.906 [2024-11-20 09:14:29.380389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.906 [2024-11-20 09:14:29.380394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.906 [2024-11-20 09:14:29.380399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.906 [2024-11-20 09:14:29.380404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.906 [2024-11-20 09:14:29.392278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.906 [2024-11-20 09:14:29.392817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.906 [2024-11-20 09:14:29.392847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.906 [2024-11-20 09:14:29.392856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.906 [2024-11-20 09:14:29.393021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.906 [2024-11-20 09:14:29.393181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.906 [2024-11-20 09:14:29.393188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.906 [2024-11-20 09:14:29.393194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.906 [2024-11-20 09:14:29.393200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.906 [2024-11-20 09:14:29.404937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.906 [2024-11-20 09:14:29.405406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.906 [2024-11-20 09:14:29.405422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.906 [2024-11-20 09:14:29.405428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.906 [2024-11-20 09:14:29.405578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.906 [2024-11-20 09:14:29.405728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.906 [2024-11-20 09:14:29.405734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.906 [2024-11-20 09:14:29.405739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.906 [2024-11-20 09:14:29.405744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.906 [2024-11-20 09:14:29.417627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.906 [2024-11-20 09:14:29.418113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.906 [2024-11-20 09:14:29.418126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:03.906 [2024-11-20 09:14:29.418135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:03.906 [2024-11-20 09:14:29.418289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:03.906 [2024-11-20 09:14:29.418440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.906 [2024-11-20 09:14:29.418446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.906 [2024-11-20 09:14:29.418451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.906 [2024-11-20 09:14:29.418456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 874861 Killed "${NVMF_APP[@]}" "$@" 00:29:03.906 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:03.906 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:03.906 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:03.906 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:03.906 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.168 [2024-11-20 09:14:29.430346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.168 [2024-11-20 09:14:29.430721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.168 [2024-11-20 09:14:29.430733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.168 [2024-11-20 09:14:29.430738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.168 [2024-11-20 09:14:29.430888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.168 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=876476 00:29:04.168 [2024-11-20 09:14:29.431039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.168 [2024-11-20 09:14:29.431046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.168 [2024-11-20 09:14:29.431051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:04.168 [2024-11-20 09:14:29.431055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.168 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 876476 00:29:04.168 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:04.168 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 876476 ']' 00:29:04.168 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.168 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.168 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:04.168 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.168 09:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.168 [2024-11-20 09:14:29.443078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.168 [2024-11-20 09:14:29.443587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.168 [2024-11-20 09:14:29.443604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.168 [2024-11-20 09:14:29.443609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.168 [2024-11-20 09:14:29.443760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.168 [2024-11-20 09:14:29.443911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.168 [2024-11-20 09:14:29.443916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.168 [2024-11-20 09:14:29.443921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.168 [2024-11-20 09:14:29.443926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.168 [2024-11-20 09:14:29.455805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.168 [2024-11-20 09:14:29.456230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.168 [2024-11-20 09:14:29.456243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.168 [2024-11-20 09:14:29.456248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.168 [2024-11-20 09:14:29.456398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.168 [2024-11-20 09:14:29.456548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.168 [2024-11-20 09:14:29.456554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.168 [2024-11-20 09:14:29.456559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.168 [2024-11-20 09:14:29.456563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.168 [2024-11-20 09:14:29.468448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.168 [2024-11-20 09:14:29.468891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.168 [2024-11-20 09:14:29.468903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.168 [2024-11-20 09:14:29.468908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.168 [2024-11-20 09:14:29.469058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.168 [2024-11-20 09:14:29.469214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.168 [2024-11-20 09:14:29.469220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.168 [2024-11-20 09:14:29.469226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.168 [2024-11-20 09:14:29.469231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.168 [2024-11-20 09:14:29.481118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.168 [2024-11-20 09:14:29.481563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.168 [2024-11-20 09:14:29.481575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.168 [2024-11-20 09:14:29.481581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.168 [2024-11-20 09:14:29.481735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.168 [2024-11-20 09:14:29.481885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.168 [2024-11-20 09:14:29.481891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.168 [2024-11-20 09:14:29.481896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.168 [2024-11-20 09:14:29.481901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.168 [2024-11-20 09:14:29.485506] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:29:04.168 [2024-11-20 09:14:29.485552] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.168 [2024-11-20 09:14:29.493777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.168 [2024-11-20 09:14:29.494231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.168 [2024-11-20 09:14:29.494244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.168 [2024-11-20 09:14:29.494249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.168 [2024-11-20 09:14:29.494401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.168 [2024-11-20 09:14:29.494551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.168 [2024-11-20 09:14:29.494557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.169 [2024-11-20 09:14:29.494562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.169 [2024-11-20 09:14:29.494566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.169 [2024-11-20 09:14:29.506446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.169 [2024-11-20 09:14:29.507001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.169 [2024-11-20 09:14:29.507031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.169 [2024-11-20 09:14:29.507040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.169 [2024-11-20 09:14:29.507219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.169 [2024-11-20 09:14:29.507373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.169 [2024-11-20 09:14:29.507379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.169 [2024-11-20 09:14:29.507386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.169 [2024-11-20 09:14:29.507392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.169 [2024-11-20 09:14:29.519135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.169 [2024-11-20 09:14:29.519611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.169 [2024-11-20 09:14:29.519627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.169 [2024-11-20 09:14:29.519633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.169 [2024-11-20 09:14:29.519787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.169 [2024-11-20 09:14:29.519938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.169 [2024-11-20 09:14:29.519944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.169 [2024-11-20 09:14:29.519949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.169 [2024-11-20 09:14:29.519954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.169 [2024-11-20 09:14:29.531782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.169 [2024-11-20 09:14:29.532299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.169 [2024-11-20 09:14:29.532330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.169 [2024-11-20 09:14:29.532338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.169 [2024-11-20 09:14:29.532508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.169 [2024-11-20 09:14:29.532662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.169 [2024-11-20 09:14:29.532668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.169 [2024-11-20 09:14:29.532674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.169 [2024-11-20 09:14:29.532679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.169 [2024-11-20 09:14:29.544431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.169 [2024-11-20 09:14:29.545011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.169 [2024-11-20 09:14:29.545041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.169 [2024-11-20 09:14:29.545050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.169 [2024-11-20 09:14:29.545222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.169 [2024-11-20 09:14:29.545377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.169 [2024-11-20 09:14:29.545383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.169 [2024-11-20 09:14:29.545389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.169 [2024-11-20 09:14:29.545394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.169 [2024-11-20 09:14:29.557141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.169 [2024-11-20 09:14:29.557616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.169 [2024-11-20 09:14:29.557647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.169 [2024-11-20 09:14:29.557655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.169 [2024-11-20 09:14:29.557822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.169 [2024-11-20 09:14:29.557975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.169 [2024-11-20 09:14:29.557985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.169 [2024-11-20 09:14:29.557991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.169 [2024-11-20 09:14:29.557996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.169 [2024-11-20 09:14:29.569890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.169 [2024-11-20 09:14:29.570468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.169 [2024-11-20 09:14:29.570498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.169 [2024-11-20 09:14:29.570507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.169 [2024-11-20 09:14:29.570674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.169 [2024-11-20 09:14:29.570827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.169 [2024-11-20 09:14:29.570833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.169 [2024-11-20 09:14:29.570839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.169 [2024-11-20 09:14:29.570845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.169 [2024-11-20 09:14:29.575051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:04.169 [2024-11-20 09:14:29.582608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.169 [2024-11-20 09:14:29.583094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.169 [2024-11-20 09:14:29.583110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.169 [2024-11-20 09:14:29.583116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.169 [2024-11-20 09:14:29.583272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.169 [2024-11-20 09:14:29.583423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.169 [2024-11-20 09:14:29.583429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.169 [2024-11-20 09:14:29.583436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.169 [2024-11-20 09:14:29.583441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.169 [2024-11-20 09:14:29.595378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.169 [2024-11-20 09:14:29.595860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.169 [2024-11-20 09:14:29.595873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.169 [2024-11-20 09:14:29.595879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.169 [2024-11-20 09:14:29.596031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.169 [2024-11-20 09:14:29.596185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.169 [2024-11-20 09:14:29.596192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.169 [2024-11-20 09:14:29.596197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.169 [2024-11-20 09:14:29.596207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.169 [2024-11-20 09:14:29.604278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.169 [2024-11-20 09:14:29.604301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.169 [2024-11-20 09:14:29.604307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.169 [2024-11-20 09:14:29.604313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:04.169 [2024-11-20 09:14:29.604317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:04.170 [2024-11-20 09:14:29.605456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:04.170 [2024-11-20 09:14:29.605610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:04.170 [2024-11-20 09:14:29.605612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:04.170 [2024-11-20 09:14:29.608089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.170 [2024-11-20 09:14:29.608579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.170 [2024-11-20 09:14:29.608593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.170 [2024-11-20 09:14:29.608598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.170 [2024-11-20 09:14:29.608749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.170 [2024-11-20 09:14:29.608900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.170 [2024-11-20 09:14:29.608906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.170 [2024-11-20 09:14:29.608911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.170 [2024-11-20 09:14:29.608916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.170 [2024-11-20 09:14:29.620805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.170 [2024-11-20 09:14:29.621437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.170 [2024-11-20 09:14:29.621472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.170 [2024-11-20 09:14:29.621481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.170 [2024-11-20 09:14:29.621654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.170 [2024-11-20 09:14:29.621808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.170 [2024-11-20 09:14:29.621814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.170 [2024-11-20 09:14:29.621820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.170 [2024-11-20 09:14:29.621826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.170 [2024-11-20 09:14:29.633475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.170 [2024-11-20 09:14:29.634085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.170 [2024-11-20 09:14:29.634117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.170 [2024-11-20 09:14:29.634126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.170 [2024-11-20 09:14:29.634314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.170 [2024-11-20 09:14:29.634469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.170 [2024-11-20 09:14:29.634475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.170 [2024-11-20 09:14:29.634481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.170 [2024-11-20 09:14:29.634487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.170 [2024-11-20 09:14:29.646233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.170 [2024-11-20 09:14:29.646597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.170 [2024-11-20 09:14:29.646614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.170 [2024-11-20 09:14:29.646620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.170 [2024-11-20 09:14:29.646772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.170 [2024-11-20 09:14:29.646923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.170 [2024-11-20 09:14:29.646929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.170 [2024-11-20 09:14:29.646934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.170 [2024-11-20 09:14:29.646939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.170 [2024-11-20 09:14:29.658988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.170 [2024-11-20 09:14:29.659566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.170 [2024-11-20 09:14:29.659597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.170 [2024-11-20 09:14:29.659606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.170 [2024-11-20 09:14:29.659778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.170 [2024-11-20 09:14:29.659931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.170 [2024-11-20 09:14:29.659938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.170 [2024-11-20 09:14:29.659944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.170 [2024-11-20 09:14:29.659950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.170 [2024-11-20 09:14:29.671700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.170 [2024-11-20 09:14:29.672204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.170 [2024-11-20 09:14:29.672226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.170 [2024-11-20 09:14:29.672232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.170 [2024-11-20 09:14:29.672390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.170 [2024-11-20 09:14:29.672542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.170 [2024-11-20 09:14:29.672553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.170 [2024-11-20 09:14:29.672559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.170 [2024-11-20 09:14:29.672564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.170 [2024-11-20 09:14:29.684321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.170 [2024-11-20 09:14:29.684881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.170 [2024-11-20 09:14:29.684911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.170 [2024-11-20 09:14:29.684919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.170 [2024-11-20 09:14:29.685086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.170 [2024-11-20 09:14:29.685246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.170 [2024-11-20 09:14:29.685253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.170 [2024-11-20 09:14:29.685259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.170 [2024-11-20 09:14:29.685265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.432 [2024-11-20 09:14:29.697007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.432 [2024-11-20 09:14:29.697651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.432 [2024-11-20 09:14:29.697681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.432 [2024-11-20 09:14:29.697690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.432 [2024-11-20 09:14:29.697857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.432 [2024-11-20 09:14:29.698010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.432 [2024-11-20 09:14:29.698016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.432 [2024-11-20 09:14:29.698022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.432 [2024-11-20 09:14:29.698027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.432 [2024-11-20 09:14:29.709632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.432 [2024-11-20 09:14:29.710223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.432 [2024-11-20 09:14:29.710254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.432 [2024-11-20 09:14:29.710262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.432 [2024-11-20 09:14:29.710432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.432 [2024-11-20 09:14:29.710585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.432 [2024-11-20 09:14:29.710591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.432 [2024-11-20 09:14:29.710596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.432 [2024-11-20 09:14:29.710606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.432 [2024-11-20 09:14:29.722357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.432 [2024-11-20 09:14:29.722946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.432 [2024-11-20 09:14:29.722977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.432 [2024-11-20 09:14:29.722986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.432 [2024-11-20 09:14:29.723155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.432 [2024-11-20 09:14:29.723328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.432 [2024-11-20 09:14:29.723335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.432 [2024-11-20 09:14:29.723341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.432 [2024-11-20 09:14:29.723347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.432 [2024-11-20 09:14:29.735087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.432 [2024-11-20 09:14:29.735582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.432 [2024-11-20 09:14:29.735613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.432 [2024-11-20 09:14:29.735622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.432 [2024-11-20 09:14:29.735789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.432 [2024-11-20 09:14:29.735942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.432 [2024-11-20 09:14:29.735949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.432 [2024-11-20 09:14:29.735955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.432 [2024-11-20 09:14:29.735961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.432 [2024-11-20 09:14:29.747709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.432 [2024-11-20 09:14:29.748261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.433 [2024-11-20 09:14:29.748291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.433 [2024-11-20 09:14:29.748300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.433 [2024-11-20 09:14:29.748469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.433 [2024-11-20 09:14:29.748622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.433 [2024-11-20 09:14:29.748629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.433 [2024-11-20 09:14:29.748634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.433 [2024-11-20 09:14:29.748640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.433 [2024-11-20 09:14:29.760388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.433 [2024-11-20 09:14:29.760988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.433 [2024-11-20 09:14:29.761018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.433 [2024-11-20 09:14:29.761027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.433 [2024-11-20 09:14:29.761200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.433 [2024-11-20 09:14:29.761354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.433 [2024-11-20 09:14:29.761361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.433 [2024-11-20 09:14:29.761366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.433 [2024-11-20 09:14:29.761372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.433 [2024-11-20 09:14:29.773107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.433 [2024-11-20 09:14:29.773338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.433 [2024-11-20 09:14:29.773353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.433 [2024-11-20 09:14:29.773359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.433 [2024-11-20 09:14:29.773510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.433 [2024-11-20 09:14:29.773660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.433 [2024-11-20 09:14:29.773666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.433 [2024-11-20 09:14:29.773671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.433 [2024-11-20 09:14:29.773676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.433 [2024-11-20 09:14:29.785852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.433 [2024-11-20 09:14:29.786457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.433 [2024-11-20 09:14:29.786487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.433 [2024-11-20 09:14:29.786496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.433 [2024-11-20 09:14:29.786662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.433 [2024-11-20 09:14:29.786816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.433 [2024-11-20 09:14:29.786822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.433 [2024-11-20 09:14:29.786828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.433 [2024-11-20 09:14:29.786833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.433 [2024-11-20 09:14:29.798585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.433 [2024-11-20 09:14:29.798824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.433 [2024-11-20 09:14:29.798839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.433 [2024-11-20 09:14:29.798844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.433 [2024-11-20 09:14:29.798999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.433 [2024-11-20 09:14:29.799152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.433 [2024-11-20 09:14:29.799161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.433 [2024-11-20 09:14:29.799167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.433 [2024-11-20 09:14:29.799172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.433 [2024-11-20 09:14:29.811198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.433 [2024-11-20 09:14:29.811536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.433 [2024-11-20 09:14:29.811548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.433 [2024-11-20 09:14:29.811554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.433 [2024-11-20 09:14:29.811705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.433 [2024-11-20 09:14:29.811855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.433 [2024-11-20 09:14:29.811861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.433 [2024-11-20 09:14:29.811866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.433 [2024-11-20 09:14:29.811870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.433 [2024-11-20 09:14:29.823901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.433 [2024-11-20 09:14:29.824359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.433 [2024-11-20 09:14:29.824371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.433 [2024-11-20 09:14:29.824377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.433 [2024-11-20 09:14:29.824528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.433 [2024-11-20 09:14:29.824678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.433 [2024-11-20 09:14:29.824684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.433 [2024-11-20 09:14:29.824689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.433 [2024-11-20 09:14:29.824694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.433 [2024-11-20 09:14:29.836570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.433 [2024-11-20 09:14:29.836909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.433 [2024-11-20 09:14:29.836923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.433 [2024-11-20 09:14:29.836929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.433 [2024-11-20 09:14:29.837079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.433 [2024-11-20 09:14:29.837235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.433 [2024-11-20 09:14:29.837244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.433 [2024-11-20 09:14:29.837250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.433 [2024-11-20 09:14:29.837254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.433 [2024-11-20 09:14:29.849279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.433 [2024-11-20 09:14:29.849849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.433 [2024-11-20 09:14:29.849879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.433 [2024-11-20 09:14:29.849888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.433 [2024-11-20 09:14:29.850055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.433 [2024-11-20 09:14:29.850215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.433 [2024-11-20 09:14:29.850222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.433 [2024-11-20 09:14:29.850227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.433 [2024-11-20 09:14:29.850233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.433 [2024-11-20 09:14:29.861973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.433 [2024-11-20 09:14:29.862513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.433 [2024-11-20 09:14:29.862543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.433 [2024-11-20 09:14:29.862552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.433 [2024-11-20 09:14:29.862718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.433 [2024-11-20 09:14:29.862871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.433 [2024-11-20 09:14:29.862877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.433 [2024-11-20 09:14:29.862883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.433 [2024-11-20 09:14:29.862888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.433 [2024-11-20 09:14:29.874645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.434 [2024-11-20 09:14:29.874998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.434 [2024-11-20 09:14:29.875013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.434 [2024-11-20 09:14:29.875018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.434 [2024-11-20 09:14:29.875173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.434 [2024-11-20 09:14:29.875325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.434 [2024-11-20 09:14:29.875330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.434 [2024-11-20 09:14:29.875336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.434 [2024-11-20 09:14:29.875344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.434 [2024-11-20 09:14:29.887367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.434 [2024-11-20 09:14:29.887918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.434 [2024-11-20 09:14:29.887948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.434 [2024-11-20 09:14:29.887956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.434 [2024-11-20 09:14:29.888123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.434 [2024-11-20 09:14:29.888282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.434 [2024-11-20 09:14:29.888289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.434 [2024-11-20 09:14:29.888295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.434 [2024-11-20 09:14:29.888301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.434 [2024-11-20 09:14:29.900042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.434 [2024-11-20 09:14:29.900609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.434 [2024-11-20 09:14:29.900640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.434 [2024-11-20 09:14:29.900648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.434 [2024-11-20 09:14:29.900815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.434 [2024-11-20 09:14:29.900968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.434 [2024-11-20 09:14:29.900974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.434 [2024-11-20 09:14:29.900980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.434 [2024-11-20 09:14:29.900985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.434 [2024-11-20 09:14:29.912734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.434 [2024-11-20 09:14:29.913233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.434 [2024-11-20 09:14:29.913248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.434 [2024-11-20 09:14:29.913254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.434 [2024-11-20 09:14:29.913405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.434 [2024-11-20 09:14:29.913556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.434 [2024-11-20 09:14:29.913561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.434 [2024-11-20 09:14:29.913566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.434 [2024-11-20 09:14:29.913571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.434 4679.17 IOPS, 18.28 MiB/s [2024-11-20T08:14:29.963Z] [2024-11-20 09:14:29.925462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.434 [2024-11-20 09:14:29.925922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.434 [2024-11-20 09:14:29.925936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.434 [2024-11-20 09:14:29.925941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.434 [2024-11-20 09:14:29.926092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.434 [2024-11-20 09:14:29.926247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.434 [2024-11-20 09:14:29.926253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.434 [2024-11-20 09:14:29.926258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.434 [2024-11-20 09:14:29.926263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.434 [2024-11-20 09:14:29.938139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.434 [2024-11-20 09:14:29.938604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.434 [2024-11-20 09:14:29.938634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.434 [2024-11-20 09:14:29.938643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.434 [2024-11-20 09:14:29.938809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.434 [2024-11-20 09:14:29.938963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.434 [2024-11-20 09:14:29.938969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.434 [2024-11-20 09:14:29.938975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.434 [2024-11-20 09:14:29.938980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.434 [2024-11-20 09:14:29.950871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.434 [2024-11-20 09:14:29.951221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.434 [2024-11-20 09:14:29.951236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420
00:29:04.434 [2024-11-20 09:14:29.951242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set
00:29:04.434 [2024-11-20 09:14:29.951393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor
00:29:04.434 [2024-11-20 09:14:29.951543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.434 [2024-11-20 09:14:29.951549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.434 [2024-11-20 09:14:29.951554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.434 [2024-11-20 09:14:29.951559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.697 [2024-11-20 09:14:29.963581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.697 [2024-11-20 09:14:29.963942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.697 [2024-11-20 09:14:29.963956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.697 [2024-11-20 09:14:29.963962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.697 [2024-11-20 09:14:29.964117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.697 [2024-11-20 09:14:29.964273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.698 [2024-11-20 09:14:29.964279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.698 [2024-11-20 09:14:29.964284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.698 [2024-11-20 09:14:29.964289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.698 [2024-11-20 09:14:29.976318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.698 [2024-11-20 09:14:29.976663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.698 [2024-11-20 09:14:29.976676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.698 [2024-11-20 09:14:29.976682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.698 [2024-11-20 09:14:29.976833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.698 [2024-11-20 09:14:29.976982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.698 [2024-11-20 09:14:29.976988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.698 [2024-11-20 09:14:29.976993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.698 [2024-11-20 09:14:29.976998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.698 [2024-11-20 09:14:29.988993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.698 [2024-11-20 09:14:29.989521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.698 [2024-11-20 09:14:29.989552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.698 [2024-11-20 09:14:29.989561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.698 [2024-11-20 09:14:29.989727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.698 [2024-11-20 09:14:29.989881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.698 [2024-11-20 09:14:29.989887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.698 [2024-11-20 09:14:29.989892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.698 [2024-11-20 09:14:29.989898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.698 [2024-11-20 09:14:30.002121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.698 [2024-11-20 09:14:30.002692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.698 [2024-11-20 09:14:30.002723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.698 [2024-11-20 09:14:30.002732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.698 [2024-11-20 09:14:30.002898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.698 [2024-11-20 09:14:30.003052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.698 [2024-11-20 09:14:30.003062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.698 [2024-11-20 09:14:30.003068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.698 [2024-11-20 09:14:30.003073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.698 [2024-11-20 09:14:30.014825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.698 [2024-11-20 09:14:30.015324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.698 [2024-11-20 09:14:30.015339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.698 [2024-11-20 09:14:30.015345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.698 [2024-11-20 09:14:30.015496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.698 [2024-11-20 09:14:30.015647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.698 [2024-11-20 09:14:30.015652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.698 [2024-11-20 09:14:30.015657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.698 [2024-11-20 09:14:30.015662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.698 [2024-11-20 09:14:30.027556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.698 [2024-11-20 09:14:30.027906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.698 [2024-11-20 09:14:30.027920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.698 [2024-11-20 09:14:30.027926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.698 [2024-11-20 09:14:30.028077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.698 [2024-11-20 09:14:30.028233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.698 [2024-11-20 09:14:30.028240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.698 [2024-11-20 09:14:30.028245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.698 [2024-11-20 09:14:30.028250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.698 [2024-11-20 09:14:30.040271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.698 [2024-11-20 09:14:30.040727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.698 [2024-11-20 09:14:30.040741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.698 [2024-11-20 09:14:30.040746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.698 [2024-11-20 09:14:30.040896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.698 [2024-11-20 09:14:30.041047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.698 [2024-11-20 09:14:30.041053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.698 [2024-11-20 09:14:30.041058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.698 [2024-11-20 09:14:30.041066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.698 [2024-11-20 09:14:30.052946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.698 [2024-11-20 09:14:30.053423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.698 [2024-11-20 09:14:30.053436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.698 [2024-11-20 09:14:30.053441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.698 [2024-11-20 09:14:30.053592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.698 [2024-11-20 09:14:30.053742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.698 [2024-11-20 09:14:30.053748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.698 [2024-11-20 09:14:30.053753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.698 [2024-11-20 09:14:30.053758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.698 [2024-11-20 09:14:30.065638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.698 [2024-11-20 09:14:30.066227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.698 [2024-11-20 09:14:30.066257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.698 [2024-11-20 09:14:30.066266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.698 [2024-11-20 09:14:30.066435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.698 [2024-11-20 09:14:30.066589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.698 [2024-11-20 09:14:30.066595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.698 [2024-11-20 09:14:30.066601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.698 [2024-11-20 09:14:30.066606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.698 [2024-11-20 09:14:30.078369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.698 [2024-11-20 09:14:30.078953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.698 [2024-11-20 09:14:30.078983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.698 [2024-11-20 09:14:30.078992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.698 [2024-11-20 09:14:30.079165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.698 [2024-11-20 09:14:30.079319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.698 [2024-11-20 09:14:30.079325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.698 [2024-11-20 09:14:30.079331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.698 [2024-11-20 09:14:30.079337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.698 [2024-11-20 09:14:30.091084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.698 [2024-11-20 09:14:30.091554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.698 [2024-11-20 09:14:30.091570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.698 [2024-11-20 09:14:30.091575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.699 [2024-11-20 09:14:30.091726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.699 [2024-11-20 09:14:30.091877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.699 [2024-11-20 09:14:30.091883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.699 [2024-11-20 09:14:30.091888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.699 [2024-11-20 09:14:30.091893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.699 [2024-11-20 09:14:30.103774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.699 [2024-11-20 09:14:30.104275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.699 [2024-11-20 09:14:30.104288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.699 [2024-11-20 09:14:30.104293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.699 [2024-11-20 09:14:30.104444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.699 [2024-11-20 09:14:30.104594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.699 [2024-11-20 09:14:30.104600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.699 [2024-11-20 09:14:30.104605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.699 [2024-11-20 09:14:30.104610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.699 [2024-11-20 09:14:30.116493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.699 [2024-11-20 09:14:30.117045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.699 [2024-11-20 09:14:30.117075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.699 [2024-11-20 09:14:30.117084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.699 [2024-11-20 09:14:30.117258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.699 [2024-11-20 09:14:30.117412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.699 [2024-11-20 09:14:30.117419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.699 [2024-11-20 09:14:30.117424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.699 [2024-11-20 09:14:30.117430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.699 [2024-11-20 09:14:30.129183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.699 [2024-11-20 09:14:30.129635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.699 [2024-11-20 09:14:30.129666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.699 [2024-11-20 09:14:30.129675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.699 [2024-11-20 09:14:30.129845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.699 [2024-11-20 09:14:30.129998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.699 [2024-11-20 09:14:30.130005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.699 [2024-11-20 09:14:30.130011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.699 [2024-11-20 09:14:30.130017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.699 [2024-11-20 09:14:30.141908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.699 [2024-11-20 09:14:30.142541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.699 [2024-11-20 09:14:30.142571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.699 [2024-11-20 09:14:30.142580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.699 [2024-11-20 09:14:30.142747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.699 [2024-11-20 09:14:30.142900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.699 [2024-11-20 09:14:30.142906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.699 [2024-11-20 09:14:30.142911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.699 [2024-11-20 09:14:30.142917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.699 [2024-11-20 09:14:30.154524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.699 [2024-11-20 09:14:30.155089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.699 [2024-11-20 09:14:30.155119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.699 [2024-11-20 09:14:30.155128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.699 [2024-11-20 09:14:30.155300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.699 [2024-11-20 09:14:30.155454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.699 [2024-11-20 09:14:30.155461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.699 [2024-11-20 09:14:30.155466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.699 [2024-11-20 09:14:30.155471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.699 [2024-11-20 09:14:30.167214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.699 [2024-11-20 09:14:30.167692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.699 [2024-11-20 09:14:30.167707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.699 [2024-11-20 09:14:30.167713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.699 [2024-11-20 09:14:30.167864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.699 [2024-11-20 09:14:30.168015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.699 [2024-11-20 09:14:30.168024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.699 [2024-11-20 09:14:30.168029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.699 [2024-11-20 09:14:30.168034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.699 [2024-11-20 09:14:30.179955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.699 [2024-11-20 09:14:30.180164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.699 [2024-11-20 09:14:30.180177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.699 [2024-11-20 09:14:30.180183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.699 [2024-11-20 09:14:30.180333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.699 [2024-11-20 09:14:30.180483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.699 [2024-11-20 09:14:30.180489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.699 [2024-11-20 09:14:30.180494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.699 [2024-11-20 09:14:30.180499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.699 [2024-11-20 09:14:30.192660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.699 [2024-11-20 09:14:30.193208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.699 [2024-11-20 09:14:30.193238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.699 [2024-11-20 09:14:30.193247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.699 [2024-11-20 09:14:30.193416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.699 [2024-11-20 09:14:30.193570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.699 [2024-11-20 09:14:30.193576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.699 [2024-11-20 09:14:30.193581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.699 [2024-11-20 09:14:30.193588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.699 [2024-11-20 09:14:30.205342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.699 [2024-11-20 09:14:30.205925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.699 [2024-11-20 09:14:30.205956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.699 [2024-11-20 09:14:30.205965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.699 [2024-11-20 09:14:30.206131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.699 [2024-11-20 09:14:30.206290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.699 [2024-11-20 09:14:30.206298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.699 [2024-11-20 09:14:30.206303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.699 [2024-11-20 09:14:30.206312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.699 [2024-11-20 09:14:30.218053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.699 [2024-11-20 09:14:30.218640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.699 [2024-11-20 09:14:30.218671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.700 [2024-11-20 09:14:30.218680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.700 [2024-11-20 09:14:30.218846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.700 [2024-11-20 09:14:30.219000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.700 [2024-11-20 09:14:30.219007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.700 [2024-11-20 09:14:30.219012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.700 [2024-11-20 09:14:30.219017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.961 [2024-11-20 09:14:30.230769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.961 [2024-11-20 09:14:30.231133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.961 [2024-11-20 09:14:30.231148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.961 [2024-11-20 09:14:30.231153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.961 [2024-11-20 09:14:30.231309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.961 [2024-11-20 09:14:30.231460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.961 [2024-11-20 09:14:30.231465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.961 [2024-11-20 09:14:30.231471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.961 [2024-11-20 09:14:30.231475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.961 [2024-11-20 09:14:30.243497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.962 [2024-11-20 09:14:30.243931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.962 [2024-11-20 09:14:30.243961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.962 [2024-11-20 09:14:30.243969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.962 [2024-11-20 09:14:30.244137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.962 [2024-11-20 09:14:30.244298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.962 [2024-11-20 09:14:30.244306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.962 [2024-11-20 09:14:30.244312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.962 [2024-11-20 09:14:30.244317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.962 [2024-11-20 09:14:30.256204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.962 [2024-11-20 09:14:30.256703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.962 [2024-11-20 09:14:30.256717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.962 [2024-11-20 09:14:30.256723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.962 [2024-11-20 09:14:30.256873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.962 [2024-11-20 09:14:30.257024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.962 [2024-11-20 09:14:30.257029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.962 [2024-11-20 09:14:30.257034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.962 [2024-11-20 09:14:30.257039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.962 [2024-11-20 09:14:30.268923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.962 [2024-11-20 09:14:30.269391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.962 [2024-11-20 09:14:30.269404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.962 [2024-11-20 09:14:30.269410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.962 [2024-11-20 09:14:30.269560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.962 [2024-11-20 09:14:30.269710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.962 [2024-11-20 09:14:30.269716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.962 [2024-11-20 09:14:30.269721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.962 [2024-11-20 09:14:30.269725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.962 [2024-11-20 09:14:30.281610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.962 [2024-11-20 09:14:30.282070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.962 [2024-11-20 09:14:30.282083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.962 [2024-11-20 09:14:30.282088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.962 [2024-11-20 09:14:30.282243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.962 [2024-11-20 09:14:30.282394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.962 [2024-11-20 09:14:30.282400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.962 [2024-11-20 09:14:30.282405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.962 [2024-11-20 09:14:30.282409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.962 [2024-11-20 09:14:30.294287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.962 [2024-11-20 09:14:30.294747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.962 [2024-11-20 09:14:30.294778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.962 [2024-11-20 09:14:30.294786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.962 [2024-11-20 09:14:30.294956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.962 [2024-11-20 09:14:30.295111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.962 [2024-11-20 09:14:30.295118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.962 [2024-11-20 09:14:30.295123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.962 [2024-11-20 09:14:30.295131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.962 [2024-11-20 09:14:30.307025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.962 [2024-11-20 09:14:30.307599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.962 [2024-11-20 09:14:30.307630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.962 [2024-11-20 09:14:30.307639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.962 [2024-11-20 09:14:30.307805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.962 [2024-11-20 09:14:30.307959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.962 [2024-11-20 09:14:30.307965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.962 [2024-11-20 09:14:30.307971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.962 [2024-11-20 09:14:30.307976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.962 [2024-11-20 09:14:30.319728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.962 [2024-11-20 09:14:30.320201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.962 [2024-11-20 09:14:30.320217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.962 [2024-11-20 09:14:30.320223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.962 [2024-11-20 09:14:30.320374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.962 [2024-11-20 09:14:30.320524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.962 [2024-11-20 09:14:30.320530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.962 [2024-11-20 09:14:30.320535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.962 [2024-11-20 09:14:30.320540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.962 [2024-11-20 09:14:30.332431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.962 [2024-11-20 09:14:30.332891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.962 [2024-11-20 09:14:30.332904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.962 [2024-11-20 09:14:30.332909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.962 [2024-11-20 09:14:30.333060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.962 [2024-11-20 09:14:30.333095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.962 [2024-11-20 09:14:30.333214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.962 [2024-11-20 09:14:30.333221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.962 [2024-11-20 09:14:30.333226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.962 [2024-11-20 09:14:30.333231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.962 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.962 [2024-11-20 09:14:30.345105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.962 [2024-11-20 09:14:30.345689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.962 [2024-11-20 09:14:30.345720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.962 [2024-11-20 09:14:30.345729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.962 [2024-11-20 09:14:30.345895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.963 [2024-11-20 09:14:30.346049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.963 [2024-11-20 09:14:30.346055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.963 [2024-11-20 09:14:30.346061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.963 [2024-11-20 09:14:30.346066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.963 [2024-11-20 09:14:30.357813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.963 [2024-11-20 09:14:30.358465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.963 [2024-11-20 09:14:30.358495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.963 [2024-11-20 09:14:30.358504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.963 [2024-11-20 09:14:30.358670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.963 [2024-11-20 09:14:30.358824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.963 [2024-11-20 09:14:30.358830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.963 [2024-11-20 09:14:30.358840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.963 [2024-11-20 09:14:30.358846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.963 Malloc0 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.963 [2024-11-20 09:14:30.370481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.963 [2024-11-20 09:14:30.370985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.963 [2024-11-20 09:14:30.370999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.963 [2024-11-20 09:14:30.371005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.963 [2024-11-20 09:14:30.371156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.963 [2024-11-20 09:14:30.371313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.963 [2024-11-20 09:14:30.371318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.963 [2024-11-20 09:14:30.371323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.963 [2024-11-20 09:14:30.371328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.963 [2024-11-20 09:14:30.383215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.963 [2024-11-20 09:14:30.383706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.963 [2024-11-20 09:14:30.383719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24af000 with addr=10.0.0.2, port=4420 00:29:04.963 [2024-11-20 09:14:30.383724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24af000 is same with the state(6) to be set 00:29:04.963 [2024-11-20 09:14:30.383875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af000 (9): Bad file descriptor 00:29:04.963 [2024-11-20 09:14:30.384025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.963 [2024-11-20 09:14:30.384031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.963 [2024-11-20 09:14:30.384036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.963 [2024-11-20 09:14:30.384041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.963 [2024-11-20 09:14:30.394406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.963 [2024-11-20 09:14:30.395920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.963 09:14:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 875238 00:29:04.963 [2024-11-20 09:14:30.463541] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:29:06.473 4847.00 IOPS, 18.93 MiB/s [2024-11-20T08:14:32.947Z] 5850.38 IOPS, 22.85 MiB/s [2024-11-20T08:14:34.007Z] 6638.89 IOPS, 25.93 MiB/s [2024-11-20T08:14:34.948Z] 7265.90 IOPS, 28.38 MiB/s [2024-11-20T08:14:36.332Z] 7783.73 IOPS, 30.41 MiB/s [2024-11-20T08:14:37.273Z] 8215.08 IOPS, 32.09 MiB/s [2024-11-20T08:14:38.214Z] 8585.69 IOPS, 33.54 MiB/s [2024-11-20T08:14:39.155Z] 8905.79 IOPS, 34.79 MiB/s [2024-11-20T08:14:39.155Z] 9180.80 IOPS, 35.86 MiB/s 00:29:13.626 Latency(us) 00:29:13.626 [2024-11-20T08:14:39.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.626 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:13.626 Verification LBA range: start 0x0 length 0x4000 00:29:13.626 Nvme1n1 : 15.01 9184.10 35.88 13393.11 0.00 5651.24 556.37 12178.77 00:29:13.626 [2024-11-20T08:14:39.155Z] =================================================================================================================== 00:29:13.626 [2024-11-20T08:14:39.155Z] Total : 9184.10 35.88 13393.11 0.00 5651.24 556.37 12178.77 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.626 rmmod nvme_tcp 00:29:13.626 rmmod nvme_fabrics 00:29:13.626 rmmod nvme_keyring 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 876476 ']' 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 876476 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 876476 ']' 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 876476 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.626 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 876476 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 876476' 00:29:13.887 killing process with pid 876476 00:29:13.887 09:14:39 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 876476 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 876476 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.887 09:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.433 00:29:16.433 real 0m28.240s 00:29:16.433 user 1m3.389s 00:29:16.433 sys 0m7.638s 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:16.433 ************************************ 00:29:16.433 END TEST nvmf_bdevperf 00:29:16.433 
************************************ 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.433 ************************************ 00:29:16.433 START TEST nvmf_target_disconnect 00:29:16.433 ************************************ 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:16.433 * Looking for test storage... 00:29:16.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:16.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.433 --rc genhtml_branch_coverage=1 00:29:16.433 --rc genhtml_function_coverage=1 00:29:16.433 --rc genhtml_legend=1 00:29:16.433 --rc geninfo_all_blocks=1 00:29:16.433 --rc geninfo_unexecuted_blocks=1 
00:29:16.433 00:29:16.433 ' 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:16.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.433 --rc genhtml_branch_coverage=1 00:29:16.433 --rc genhtml_function_coverage=1 00:29:16.433 --rc genhtml_legend=1 00:29:16.433 --rc geninfo_all_blocks=1 00:29:16.433 --rc geninfo_unexecuted_blocks=1 00:29:16.433 00:29:16.433 ' 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:16.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.433 --rc genhtml_branch_coverage=1 00:29:16.433 --rc genhtml_function_coverage=1 00:29:16.433 --rc genhtml_legend=1 00:29:16.433 --rc geninfo_all_blocks=1 00:29:16.433 --rc geninfo_unexecuted_blocks=1 00:29:16.433 00:29:16.433 ' 00:29:16.433 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:16.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.433 --rc genhtml_branch_coverage=1 00:29:16.433 --rc genhtml_function_coverage=1 00:29:16.433 --rc genhtml_legend=1 00:29:16.433 --rc geninfo_all_blocks=1 00:29:16.433 --rc geninfo_unexecuted_blocks=1 00:29:16.433 00:29:16.433 ' 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.434 09:14:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.434 09:14:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:24.574 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.574 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.574 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.575 
09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:24.575 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:24.575 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:24.575 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:24.575 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.575 09:14:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.575 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.575 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.576 09:14:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:29:24.576 00:29:24.576 --- 10.0.0.2 ping statistics --- 00:29:24.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.576 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:24.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:29:24.576 00:29:24.576 --- 10.0.0.1 ping statistics --- 00:29:24.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.576 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:24.576 09:14:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:24.576 ************************************ 00:29:24.576 START TEST nvmf_target_disconnect_tc1 00:29:24.576 ************************************ 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:24.576 [2024-11-20 09:14:49.446671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.576 [2024-11-20 09:14:49.446746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736ad0 with 
addr=10.0.0.2, port=4420 00:29:24.576 [2024-11-20 09:14:49.446771] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:24.576 [2024-11-20 09:14:49.446784] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:24.576 [2024-11-20 09:14:49.446792] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:24.576 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:24.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:24.576 Initializing NVMe Controllers 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:24.576 00:29:24.576 real 0m0.146s 00:29:24.576 user 0m0.059s 00:29:24.576 sys 0m0.087s 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:24.576 ************************************ 00:29:24.576 END TEST nvmf_target_disconnect_tc1 00:29:24.576 ************************************ 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:24.576 09:14:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:24.576 ************************************ 00:29:24.576 START TEST nvmf_target_disconnect_tc2 00:29:24.576 ************************************ 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=882623 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 882623 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 882623 ']' 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.576 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.577 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.577 09:14:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.577 [2024-11-20 09:14:49.608006] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:29:24.577 [2024-11-20 09:14:49.608065] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.577 [2024-11-20 09:14:49.706026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.577 [2024-11-20 09:14:49.757807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.577 [2024-11-20 09:14:49.757857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.577 [2024-11-20 09:14:49.757866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.577 [2024-11-20 09:14:49.757873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.577 [2024-11-20 09:14:49.757879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:24.577 [2024-11-20 09:14:49.760243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:24.577 [2024-11-20 09:14:49.760380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:24.577 [2024-11-20 09:14:49.760540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:24.577 [2024-11-20 09:14:49.760541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.148 Malloc0 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.148 09:14:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.148 [2024-11-20 09:14:50.526826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.148 09:14:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.148 [2024-11-20 09:14:50.567227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=882699 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:25.148 09:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:27.722 09:14:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 882623 00:29:27.722 09:14:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Write completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Write completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Write completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Write completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Write 
completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Write completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Write completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Write completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Write completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 [2024-11-20 09:14:52.606086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 
00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.722 Read completed with error (sct=0, sc=8) 00:29:27.722 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Write completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Write completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Write completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Write completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Write completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Write completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Write completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Write completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 Read completed with error (sct=0, sc=8) 00:29:27.723 starting I/O failed 00:29:27.723 
[2024-11-20 09:14:52.606481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.723 [2024-11-20 09:14:52.606912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.606933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.607246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.607282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.607668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.607681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.607917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.607930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.608114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.608129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 
00:29:27.723 [2024-11-20 09:14:52.608275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.608289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.608626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.608642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.608875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.608889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.609243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.609256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.609597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.609616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 
00:29:27.723 [2024-11-20 09:14:52.609933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.609947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.610277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.610290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.610404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.610416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.610773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.610786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.611094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.611108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 
00:29:27.723 [2024-11-20 09:14:52.611348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.611364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.611705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.611718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.612040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.612055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.612412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.612427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.612611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.612625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 
00:29:27.723 [2024-11-20 09:14:52.612972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.612985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.613273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.613288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.613586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.613600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.613950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.613964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.614269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.614284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 
00:29:27.723 [2024-11-20 09:14:52.614496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.614510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.614828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.614844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.615151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.615168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.615472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.615487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 00:29:27.723 [2024-11-20 09:14:52.615842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.723 [2024-11-20 09:14:52.615855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.723 qpair failed and we were unable to recover it. 
00:29:27.723 [2024-11-20 09:14:52.616215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.616230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.616360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.616373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.616714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.616728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.617080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.617095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.617435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.617450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 
00:29:27.724 [2024-11-20 09:14:52.617803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.617817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.618151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.618169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.618475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.618489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.618800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.618814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.619173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.619188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 
00:29:27.724 [2024-11-20 09:14:52.619465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.619479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.619829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.619844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.620176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.620190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.620542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.620554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.620899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.620912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 
00:29:27.724 [2024-11-20 09:14:52.621278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.621293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.621657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.621672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.621983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.621995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.622557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.622574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.622916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.622930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 
00:29:27.724 [2024-11-20 09:14:52.623286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.623304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.623654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.623668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.623986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.624000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.624327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.624340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 00:29:27.724 [2024-11-20 09:14:52.624649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.724 [2024-11-20 09:14:52.624663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:27.724 qpair failed and we were unable to recover it. 
00:29:27.724 [2024-11-20 09:14:52.625013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.625027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.625338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.625350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.625663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.625676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.626020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.626033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.626267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.626280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.627444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.627483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.627822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.627839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.628196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.628212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.628528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.628543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.628762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.628776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.629095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.629108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.629440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.629454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.629751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.724 [2024-11-20 09:14:52.629763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.724 qpair failed and we were unable to recover it.
00:29:27.724 [2024-11-20 09:14:52.630148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.630186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.630510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.630524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.630866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.630880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.631210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.631224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.631580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.631595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.631871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.631884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.632201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.632215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.632534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.632546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.632766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.632779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.633113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.633131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.633477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.633492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.633822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.633839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.634143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.634169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.634390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.634407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.634768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.634784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.635097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.635113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.635411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.635427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.635738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.635756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.636073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.636088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.636317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.636332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.636681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.636696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.637001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.637015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.637415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.637431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.637742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.637759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.638130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.638145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.638508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.638525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.638828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.638843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.639185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.639202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.639532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.639548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.639857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.639874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.640190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.640207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.640518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.640533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.640852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.640866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.641191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.641209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.641518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.641534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.641763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.641778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.642095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.642110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.725 qpair failed and we were unable to recover it.
00:29:27.725 [2024-11-20 09:14:52.642432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.725 [2024-11-20 09:14:52.642449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.642766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.642782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.643084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.643101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.643330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.643348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.643670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.643687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.643995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.644011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.644337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.644353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.644664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.644681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.644993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.645009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.645340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.645358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.645660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.645675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.646032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.646052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.646437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.646455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.646756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.646776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.647095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.647112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.647465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.647483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.647785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.647803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.648017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.648035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.648370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.648390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.648706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.648724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.649060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.649077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.649398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.649416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.649771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.649789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.650000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.650018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.650258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.650277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.650641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.650660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.651001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.651021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.651363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.651382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.651673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.651691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.651897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.651915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.652156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.652190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.652551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.652569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.652911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.652929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.653260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.653279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.653644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.653664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.654016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.654033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 [2024-11-20 09:14:52.654145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.726 [2024-11-20 09:14:52.654170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:27.726 qpair failed and we were unable to recover it.
00:29:27.726 Read completed with error (sct=0, sc=8)
00:29:27.726 starting I/O failed
00:29:27.726 Read completed with error (sct=0, sc=8)
00:29:27.726 starting I/O failed
00:29:27.726 Read completed with error (sct=0, sc=8)
00:29:27.726 starting I/O failed
00:29:27.726 Read completed with error (sct=0, sc=8)
00:29:27.726 starting I/O failed
00:29:27.726 Read completed with error (sct=0, sc=8)
00:29:27.726 starting I/O failed
00:29:27.726 Read completed with error (sct=0, sc=8)
00:29:27.726 starting I/O failed
00:29:27.726 Read completed with error (sct=0, sc=8)
00:29:27.726 starting I/O failed
00:29:27.726 Read completed with error (sct=0, sc=8)
00:29:27.726 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Write completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Write completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Write completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Write completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Write completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Write completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Write completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Write completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 Read completed with error (sct=0, sc=8)
00:29:27.727 starting I/O failed
00:29:27.727 [2024-11-20 09:14:52.654682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:27.727 [2024-11-20 09:14:52.655114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.655199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
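The burst of `Read/Write completed with error (sct=0, sc=8)` entries above reports NVMe completion statuses: `sct` is the status code type and `sc` the status code. A minimal sketch of decoding that pair, assuming the generic-command-status names from the NVMe base specification (the table and helper below are illustrative, not SPDK code):

```python
# Hedged sketch: decode the (sct, sc) pairs printed in the log above.
# Assumption: sct=0 selects the NVMe "generic command status" type, in which
# sc=0x08 is "Command Aborted due to SQ Deletion" per the NVMe base spec.
SCT_NAMES = {0: "GENERIC", 1: "COMMAND_SPECIFIC", 2: "MEDIA_ERROR", 7: "VENDOR_SPECIFIC"}
GENERIC_SC = {
    0x00: "SUCCESS",
    0x04: "DATA_TRANSFER_ERROR",
    0x08: "ABORTED_SQ_DELETION",  # the sc=8 reported for every failed I/O above
}

def decode(sct: int, sc: int) -> str:
    """Return a human-readable name for an NVMe completion status pair."""
    sct_name = SCT_NAMES.get(sct, f"SCT_{sct}")
    # Only the generic status type is mapped here; others fall back to hex.
    sc_name = GENERIC_SC.get(sc, f"SC_{sc:#04x}") if sct == 0 else f"SC_{sc:#04x}"
    return f"{sct_name}/{sc_name}"

print(decode(0, 8))  # GENERIC/ABORTED_SQ_DELETION
```

Read this way, the 32 aborted commands are consistent with the queue pair being torn down after the `CQ transport error -6` that follows them.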
00:29:27.727 [2024-11-20 09:14:52.655607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.655628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.655920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.655938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.656413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.656490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.656787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.656810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.657136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.657155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.657512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.657530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.657829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.657845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.658197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.658216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.658433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.658451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.658822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.658840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.659137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.659155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.659314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.659331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.659566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.659583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.659836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.659853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.660250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.660269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.660499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.660516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.660865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.660882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.661106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.661122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.661494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.661512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.661813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.727 [2024-11-20 09:14:52.661830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.727 qpair failed and we were unable to recover it.
00:29:27.727 [2024-11-20 09:14:52.662049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.727 [2024-11-20 09:14:52.662065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.727 qpair failed and we were unable to recover it. 00:29:27.727 [2024-11-20 09:14:52.662391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.727 [2024-11-20 09:14:52.662409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.727 qpair failed and we were unable to recover it. 00:29:27.727 [2024-11-20 09:14:52.662747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.727 [2024-11-20 09:14:52.662768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.727 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.662979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.662996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.663223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.663243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 
00:29:27.728 [2024-11-20 09:14:52.663584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.663602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.663931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.663950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.664298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.664316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.664645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.664663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.664875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.664893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 
00:29:27.728 [2024-11-20 09:14:52.665111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.665128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.665462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.665480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.665688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.665705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.666040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.666058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.666379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.666397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 
00:29:27.728 [2024-11-20 09:14:52.666728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.666746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.667102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.667122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.667486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.667504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.667828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.667847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.668190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.668207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 
00:29:27.728 [2024-11-20 09:14:52.668535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.668552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.668893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.668910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.669243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.669263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.669587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.669605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.670010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.670028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 
00:29:27.728 [2024-11-20 09:14:52.670338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.670355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.670689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.670706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.671111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.671129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.671397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.671415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.671728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.671746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 
00:29:27.728 [2024-11-20 09:14:52.672078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.672097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.672488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.672506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.672766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.672783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.673011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.673030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.673266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.673284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 
00:29:27.728 [2024-11-20 09:14:52.673616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.673635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.673966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.673983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.674214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.674235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.674647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.674667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.728 [2024-11-20 09:14:52.674882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.674898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 
00:29:27.728 [2024-11-20 09:14:52.675177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.728 [2024-11-20 09:14:52.675195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.728 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.675559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.675577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.675922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.675943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.676152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.676175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.676542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.676560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 
00:29:27.729 [2024-11-20 09:14:52.676908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.676928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.677268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.677287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.677623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.677643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.677991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.678010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.678276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.678294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 
00:29:27.729 [2024-11-20 09:14:52.678615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.678632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.678973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.678992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.679337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.679355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.679699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.679717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.680055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.680072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 
00:29:27.729 [2024-11-20 09:14:52.680441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.680461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.680814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.680833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.681190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.681208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.681562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.681580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.681917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.681936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 
00:29:27.729 [2024-11-20 09:14:52.682287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.682307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.682648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.682667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.683006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.683024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.683326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.683344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.683695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.683712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 
00:29:27.729 [2024-11-20 09:14:52.684020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.684038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.684276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.684294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.684721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.684739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.685080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.685098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.685472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.685490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 
00:29:27.729 [2024-11-20 09:14:52.685829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.685847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.686189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.686208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.686494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.686511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.686850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.686867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.687202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.687222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 
00:29:27.729 [2024-11-20 09:14:52.687578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.687595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.687934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.687952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.729 qpair failed and we were unable to recover it. 00:29:27.729 [2024-11-20 09:14:52.688283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.729 [2024-11-20 09:14:52.688302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.730 qpair failed and we were unable to recover it. 00:29:27.730 [2024-11-20 09:14:52.688653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.730 [2024-11-20 09:14:52.688672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.730 qpair failed and we were unable to recover it. 00:29:27.730 [2024-11-20 09:14:52.689010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.730 [2024-11-20 09:14:52.689028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.730 qpair failed and we were unable to recover it. 
00:29:27.730 [... the same three-line failure sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats from 09:14:52.689250 through 09:14:52.726529 ...]
00:29:27.733 [2024-11-20 09:14:52.726874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.726891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.727211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.727231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.727621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.727638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.727864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.727879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.728240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.728258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 
00:29:27.733 [2024-11-20 09:14:52.728594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.728611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.728947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.728963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.729203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.729221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.729483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.729499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.729892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.729909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 
00:29:27.733 [2024-11-20 09:14:52.730224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.730243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.730607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.730623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.730993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.731014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.731389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.731408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.731701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.731718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 
00:29:27.733 [2024-11-20 09:14:52.732053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.732071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.732482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.732500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.732829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.732847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.733063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.733083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.733311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.733329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 
00:29:27.733 [2024-11-20 09:14:52.733653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.733672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.734012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.734031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.734367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.734384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.734706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.734725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.735063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.735079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 
00:29:27.733 [2024-11-20 09:14:52.735423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.735444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.735783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.735800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.736214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.736233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.736568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.736585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.736969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.736985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 
00:29:27.733 [2024-11-20 09:14:52.737212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.737230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.737564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.737581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.737931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.733 [2024-11-20 09:14:52.737950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.733 qpair failed and we were unable to recover it. 00:29:27.733 [2024-11-20 09:14:52.738280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.738298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.738673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.738690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 
00:29:27.734 [2024-11-20 09:14:52.739036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.739055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.739429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.739447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.739790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.739809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.740151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.740179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.740522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.740541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 
00:29:27.734 [2024-11-20 09:14:52.740855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.740872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.741221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.741241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.741468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.741485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.741825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.741841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.742184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.742202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 
00:29:27.734 [2024-11-20 09:14:52.742553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.742571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.742914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.742933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.743193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.743212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.743566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.743584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.743921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.743939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 
00:29:27.734 [2024-11-20 09:14:52.744292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.744312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.744669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.744685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.745070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.745091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.745366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.745384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.745719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.745735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 
00:29:27.734 [2024-11-20 09:14:52.746048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.746067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.746391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.746408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.746749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.746767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.747163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.747181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.747593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.747611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 
00:29:27.734 [2024-11-20 09:14:52.747966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.747983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.748322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.748340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.748712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.748731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.749068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.749085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.749411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.749429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 
00:29:27.734 [2024-11-20 09:14:52.749771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.749790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.750132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.750150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.750388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.750406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.750715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.750734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.751071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.751089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 
00:29:27.734 [2024-11-20 09:14:52.751412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.734 [2024-11-20 09:14:52.751430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.734 qpair failed and we were unable to recover it. 00:29:27.734 [2024-11-20 09:14:52.751636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.735 [2024-11-20 09:14:52.751656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.735 qpair failed and we were unable to recover it. 00:29:27.735 [2024-11-20 09:14:52.752040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.735 [2024-11-20 09:14:52.752058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.735 qpair failed and we were unable to recover it. 00:29:27.735 [2024-11-20 09:14:52.752399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.735 [2024-11-20 09:14:52.752418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.735 qpair failed and we were unable to recover it. 00:29:27.735 [2024-11-20 09:14:52.752758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.735 [2024-11-20 09:14:52.752776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.735 qpair failed and we were unable to recover it. 
00:29:27.735 [2024-11-20 09:14:52.753173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.735 [2024-11-20 09:14:52.753192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.735 qpair failed and we were unable to recover it.
00:29:27.738 [... previous message group (errno 111, ECONNREFUSED, for tqpair=0x7f0628000b90 at 10.0.0.2 port 4420) repeated 114 more times between 09:14:52.753 and 09:14:52.792; repeats elided ...]
00:29:27.738 [2024-11-20 09:14:52.792345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.792362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.792594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.792610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.792814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.792834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.793076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.793094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.793335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.793352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 
00:29:27.738 [2024-11-20 09:14:52.793603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.793621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.793978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.793997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.794223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.794241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.794577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.794596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.794912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.794933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 
00:29:27.738 [2024-11-20 09:14:52.795221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.795238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.795578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.795595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.795903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.795922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.796258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.796277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.796635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.796652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 
00:29:27.738 [2024-11-20 09:14:52.796971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.796988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.797264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.797281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.797612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.797632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.797882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.797900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.798249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.798268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 
00:29:27.738 [2024-11-20 09:14:52.798508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.798525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.798855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.798875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.799095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.799113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.799459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.799478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.799792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.799811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 
00:29:27.738 [2024-11-20 09:14:52.800174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.800193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.800535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.800553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.800894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-11-20 09:14:52.800912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.738 qpair failed and we were unable to recover it. 00:29:27.738 [2024-11-20 09:14:52.801029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.801048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.801683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.801808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 
00:29:27.739 [2024-11-20 09:14:52.802192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.802238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.802640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.802659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.802867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.802884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.803067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.803086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.803432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.803451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 
00:29:27.739 [2024-11-20 09:14:52.803788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.803807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.804140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.804164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.804462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.804478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.804752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.804768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.805087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.805105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 
00:29:27.739 [2024-11-20 09:14:52.805419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.805436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.805760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.805780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.806131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.806148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.806393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.806412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.806701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.806718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 
00:29:27.739 [2024-11-20 09:14:52.807035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.807054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.807401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.807419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.807769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.807788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.808081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.808099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.808368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.808390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 
00:29:27.739 [2024-11-20 09:14:52.808728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.808747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.809068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.809086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.809406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.809425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.809770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.809787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.810000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.810018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 
00:29:27.739 [2024-11-20 09:14:52.810361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.810378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.810730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.810750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.811089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.811105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.811417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.811437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.811773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.811789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 
00:29:27.739 [2024-11-20 09:14:52.812130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.812149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.812428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.812447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.812800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.812818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.813155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.813181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.739 [2024-11-20 09:14:52.813501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.813520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 
00:29:27.739 [2024-11-20 09:14:52.813866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.739 [2024-11-20 09:14:52.813883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.739 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.814190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.814208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.814634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.814651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.814964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.814983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.815395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.815412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 
00:29:27.740 [2024-11-20 09:14:52.815753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.815772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.816113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.816131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.816457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.816475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.816813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.816831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.817172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.817192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 
00:29:27.740 [2024-11-20 09:14:52.817544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.817561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.817902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.817922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.818293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.818312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.818504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.818522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 00:29:27.740 [2024-11-20 09:14:52.818863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.740 [2024-11-20 09:14:52.818880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.740 qpair failed and we were unable to recover it. 
00:29:27.743 [2024-11-20 09:14:52.855345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.855365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.855696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.855714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.856053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.856072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.856441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.856457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.856786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.856803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 
00:29:27.743 [2024-11-20 09:14:52.857104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.857122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.857271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.857289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.857665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.857682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.858016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.858032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.858256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.858273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 
00:29:27.743 [2024-11-20 09:14:52.858511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.858527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.858847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.858864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.859136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.859154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.859466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.859483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.859697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.859715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 
00:29:27.743 [2024-11-20 09:14:52.860074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.860092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.860198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.860217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.860573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.860589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.860797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.860814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.861028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.861048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 
00:29:27.743 [2024-11-20 09:14:52.861413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.861432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.861772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.861790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.743 [2024-11-20 09:14:52.862091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.743 [2024-11-20 09:14:52.862109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.743 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.862341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.862361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.862723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.862741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 
00:29:27.744 [2024-11-20 09:14:52.862977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.862995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.863238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.863256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.863652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.863671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.864005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.864028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.864354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.864372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 
00:29:27.744 [2024-11-20 09:14:52.864719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.864736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.865173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.865191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.865530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.865548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.865891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.865908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.866133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.866149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 
00:29:27.744 [2024-11-20 09:14:52.866474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.866492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.866802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.866818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.867013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.867031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.867332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.867351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.867691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.867710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 
00:29:27.744 [2024-11-20 09:14:52.868010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.868028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.868362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.868382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.868727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.868743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.869088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.869106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.869346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.869365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 
00:29:27.744 [2024-11-20 09:14:52.869726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.869743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.870085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.870103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.870449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.870467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.870811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.870830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.871169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.871189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 
00:29:27.744 [2024-11-20 09:14:52.871532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.871549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.871752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.871769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.872103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.872120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.872469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.872487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.872789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.872806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 
00:29:27.744 [2024-11-20 09:14:52.873036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.873052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.873374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.873393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.873729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.873746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.873961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.873978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 00:29:27.744 [2024-11-20 09:14:52.874222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.874243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.744 qpair failed and we were unable to recover it. 
00:29:27.744 [2024-11-20 09:14:52.874574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.744 [2024-11-20 09:14:52.874591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.874795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.874810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.875167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.875186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.875523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.875540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.875861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.875881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 
00:29:27.745 [2024-11-20 09:14:52.876212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.876230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.876596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.876614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.876863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.876879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.877219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.877237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.877576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.877593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 
00:29:27.745 [2024-11-20 09:14:52.877781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.877800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.878151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.878186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.878368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.878385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.878727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.878744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 00:29:27.745 [2024-11-20 09:14:52.879113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.745 [2024-11-20 09:14:52.879129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.745 qpair failed and we were unable to recover it. 
00:29:27.745 [2024-11-20 09:14:52.879459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.745 [2024-11-20 09:14:52.879476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.745 qpair failed and we were unable to recover it.
00:29:27.748 [2024-11-20 09:14:52.917976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.917992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.918224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.918241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.918596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.918612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.918950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.918970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.919338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.919357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 
00:29:27.748 [2024-11-20 09:14:52.919696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.919715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.920052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.920069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.920420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.920438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.920790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.920807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.921141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.921170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 
00:29:27.748 [2024-11-20 09:14:52.921507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.921525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.921884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.921903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.922239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.922258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.922599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.922617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.922962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.922980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 
00:29:27.748 [2024-11-20 09:14:52.923307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.923326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.923637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.923656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.923998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.924018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.748 qpair failed and we were unable to recover it. 00:29:27.748 [2024-11-20 09:14:52.924270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.748 [2024-11-20 09:14:52.924287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.924654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.924674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 
00:29:27.749 [2024-11-20 09:14:52.924893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.924910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.925307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.925326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.925681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.925699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.926044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.926063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.926378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.926396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 
00:29:27.749 [2024-11-20 09:14:52.926611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.926627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.926949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.926967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.927305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.927323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.927659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.927676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.928022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.928045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 
00:29:27.749 [2024-11-20 09:14:52.928375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.928393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.928596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.928612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.928804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.928824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.929144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.929170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.929510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.929529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 
00:29:27.749 [2024-11-20 09:14:52.930114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.930141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.930527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.930547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.930882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.930902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.931236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.931254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.931600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.931617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 
00:29:27.749 [2024-11-20 09:14:52.931947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.931966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.932281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.932299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.932635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.932655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.933181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.933206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.933551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.933569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 
00:29:27.749 [2024-11-20 09:14:52.933904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.933921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.934276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.934296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.934503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.934520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.934899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.934917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.935277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.935298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 
00:29:27.749 [2024-11-20 09:14:52.935635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.935654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.935990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.936009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.936232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.936253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.936586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.936604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.936942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.936961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 
00:29:27.749 [2024-11-20 09:14:52.937310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.937329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.749 qpair failed and we were unable to recover it. 00:29:27.749 [2024-11-20 09:14:52.937669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.749 [2024-11-20 09:14:52.937687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.938015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.938032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.938389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.938409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.938735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.938752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 
00:29:27.750 [2024-11-20 09:14:52.939059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.939076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.939390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.939407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.939758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.939777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.940004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.940023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.940433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.940451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 
00:29:27.750 [2024-11-20 09:14:52.940655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.940671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.941030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.941047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.941404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.941424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.941755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.941772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.942101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.942123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 
00:29:27.750 [2024-11-20 09:14:52.942475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.942494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.942835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.942854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.943186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.943205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.943543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.943563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 00:29:27.750 [2024-11-20 09:14:52.943894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.943911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it. 
00:29:27.750 [2024-11-20 09:14:52.944238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.750 [2024-11-20 09:14:52.944255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.750 qpair failed and we were unable to recover it.
[... the same three-line group — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 09:14:52.944591 through 09:14:52.986891 ...]
00:29:27.753 [2024-11-20 09:14:52.987292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.987310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.987658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.987677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.987903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.987921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.988284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.988303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.988659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.988677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 
00:29:27.753 [2024-11-20 09:14:52.989009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.989027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.989340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.989358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.989721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.989739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.990073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.990090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.990467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.990485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 
00:29:27.753 [2024-11-20 09:14:52.990829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.990847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.991148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.991172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.991525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.991543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.991897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.753 [2024-11-20 09:14:52.991916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.753 qpair failed and we were unable to recover it. 00:29:27.753 [2024-11-20 09:14:52.992213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.992231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 
00:29:27.754 [2024-11-20 09:14:52.992592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.992609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.992955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.992972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.993213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.993230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.993515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.993533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.993867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.993885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 
00:29:27.754 [2024-11-20 09:14:52.994119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.994136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.994454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.994471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.994646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.994662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.995012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.995031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.995396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.995414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 
00:29:27.754 [2024-11-20 09:14:52.995747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.995766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.995977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.995998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.996388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.996406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.996739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.996756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.997100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.997117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 
00:29:27.754 [2024-11-20 09:14:52.997478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.997496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.997821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.997839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.998135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.998152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.998483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.998502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.998815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.998832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 
00:29:27.754 [2024-11-20 09:14:52.999191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.999210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.999637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.999655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:52.999978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:52.999998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:53.000225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:53.000246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:53.000462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:53.000479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 
00:29:27.754 [2024-11-20 09:14:53.000696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:53.000712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:53.001012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:53.001029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:53.001349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:53.001367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:53.001712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:53.001728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:53.001959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:53.001975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 
00:29:27.754 [2024-11-20 09:14:53.002193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:53.002212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:53.002526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:53.002543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.754 [2024-11-20 09:14:53.002884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.754 [2024-11-20 09:14:53.002903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.754 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.003195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.003213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.003561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.003578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 
00:29:27.755 [2024-11-20 09:14:53.003917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.003934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.004209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.004226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.004573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.004590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.004913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.004932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.005118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.005137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 
00:29:27.755 [2024-11-20 09:14:53.005345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.005362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.005715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.005732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.006065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.006083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.006421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.006440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.006784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.006803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 
00:29:27.755 [2024-11-20 09:14:53.007018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.007036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.007235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.007252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.007600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.007618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.007958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.007976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.008205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.008223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 
00:29:27.755 [2024-11-20 09:14:53.008511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.008528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.008843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.008865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.009205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.009223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.009574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.009592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.009935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.009952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 
00:29:27.755 [2024-11-20 09:14:53.010276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.010293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.010638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.010656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.010964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.010983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.011221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.011238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.011596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.011614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 
00:29:27.755 [2024-11-20 09:14:53.011917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.011935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.012280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.012300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.012638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.012656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.012892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.012909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.013316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.013335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 
00:29:27.755 [2024-11-20 09:14:53.013674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.013693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.013988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.014005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.014361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.014380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.014715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.014731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 00:29:27.755 [2024-11-20 09:14:53.014943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.755 [2024-11-20 09:14:53.014959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.755 qpair failed and we were unable to recover it. 
00:29:27.755 [2024-11-20 09:14:53.015338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.755 [2024-11-20 09:14:53.015356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.015728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.015747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.016051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.016069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.016273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.016291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.016615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.016634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.016987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.017005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.017360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.017377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.017718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.017736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.018081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.018100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.018428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.018446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.018790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.018807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.019081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.019098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.019426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.019443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.019762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.019780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.020117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.020134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.020351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.020368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.020695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.020712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.021061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.021080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.021424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.021442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.021783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.021802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.022151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.022176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.022509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.022531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.022846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.022863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.023211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.023230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.023476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.023492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.023731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.023747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.024074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.024092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.024428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.024446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.024626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.024646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.024875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.024894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.025249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.025267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.025657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.025675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.025868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.025887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.026216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.026234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.026565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.026583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.026957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.026974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.027313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.027331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.027688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.027705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.756 qpair failed and we were unable to recover it.
00:29:27.756 [2024-11-20 09:14:53.028106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.756 [2024-11-20 09:14:53.028124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.028462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.028480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.028706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.028723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.029043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.029059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.029401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.029419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.029763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.029783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.030126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.030144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.030422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.030439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.030785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.030802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.031134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.031152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.031527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.031545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.031748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.031764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.032122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.032140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.032499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.032517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.032867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.032885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.033248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.033266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.033507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.033523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.033865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.033882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.034233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.034252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.034629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.034647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.034977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.034994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.035336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.035353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.035701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.035720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.035927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.035953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.036171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.036189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.036557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.036575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.036919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.036938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.037205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.037224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.037574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.037593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.037933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.037950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.038179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.038196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.038538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.038555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.038892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.038910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.039236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.039254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.039628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.039646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.039888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.039905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.040198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.040215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.040552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.040570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.757 qpair failed and we were unable to recover it.
00:29:27.757 [2024-11-20 09:14:53.040902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.757 [2024-11-20 09:14:53.040919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.041244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.041262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.041614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.041633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.041967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.041984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.042236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.042254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.042594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.042611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.042956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.042975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.043327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.043345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.043694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.043713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.044045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.044063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.044274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.044291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.044633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.044650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.044990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.045010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.045343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.045360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.045711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.045729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.046062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.046080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.046404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.046421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.046645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.046662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.046929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.758 [2024-11-20 09:14:53.046946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.758 qpair failed and we were unable to recover it.
00:29:27.758 [2024-11-20 09:14:53.047280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.047297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.047637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.047655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.047972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.047991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.048328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.048345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.048686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.048703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 
00:29:27.758 [2024-11-20 09:14:53.049038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.049058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.049374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.049396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.049772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.049790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.050128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.050145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.050547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.050566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 
00:29:27.758 [2024-11-20 09:14:53.050899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.050918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.051137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.051154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.051507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.051525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.051861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.051878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.052246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.052265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 
00:29:27.758 [2024-11-20 09:14:53.052612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.052630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.052976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.052994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.053328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.053347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.053676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.053695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.758 qpair failed and we were unable to recover it. 00:29:27.758 [2024-11-20 09:14:53.054031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.758 [2024-11-20 09:14:53.054048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 
00:29:27.759 [2024-11-20 09:14:53.054203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.054222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.054580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.054597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.054915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.054932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.055278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.055295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.055641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.055661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 
00:29:27.759 [2024-11-20 09:14:53.055999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.056017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.056253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.056270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.056625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.056642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.056966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.056985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.057234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.057252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 
00:29:27.759 [2024-11-20 09:14:53.057590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.057610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.057947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.057965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.058307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.058326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.058660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.058678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.058990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.059009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 
00:29:27.759 [2024-11-20 09:14:53.059331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.059349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.059701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.059720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.060053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.060072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.060400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.060417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.060758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.060777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 
00:29:27.759 [2024-11-20 09:14:53.061108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.061126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.061459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.061478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.061824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.061842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.062177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.062197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.062538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.062556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 
00:29:27.759 [2024-11-20 09:14:53.062901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.062921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.063238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.063258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.063609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.063628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.063970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.063988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.064325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.064344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 
00:29:27.759 [2024-11-20 09:14:53.064722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.064740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.065080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.065100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.065411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.759 [2024-11-20 09:14:53.065429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.759 qpair failed and we were unable to recover it. 00:29:27.759 [2024-11-20 09:14:53.065761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.065780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.066109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.066126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 
00:29:27.760 [2024-11-20 09:14:53.066376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.066393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.066601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.066619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.066959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.066976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.067318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.067335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.067678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.067697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 
00:29:27.760 [2024-11-20 09:14:53.068034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.068052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.068391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.068408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.068725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.068744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.069081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.069099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.069440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.069458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 
00:29:27.760 [2024-11-20 09:14:53.069782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.069801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.069998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.070018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.070321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.070338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.070672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.070691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.070977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.070995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 
00:29:27.760 [2024-11-20 09:14:53.071330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.071349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.071687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.071704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.072085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.072103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.072428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.072446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.072628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.072648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 
00:29:27.760 [2024-11-20 09:14:53.072999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.073019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.073336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.073354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.073709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.073727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.074060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.074078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.074418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.074437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 
00:29:27.760 [2024-11-20 09:14:53.074761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.074779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.075108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.075128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.075448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.075466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.075798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.075817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 00:29:27.760 [2024-11-20 09:14:53.076165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.760 [2024-11-20 09:14:53.076184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.760 qpair failed and we were unable to recover it. 
00:29:27.760 [2024-11-20 09:14:53.076533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.760 [2024-11-20 09:14:53.076551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.760 qpair failed and we were unable to recover it.
00:29:27.760 [... the three-line error above (posix.c:1054:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 2024-11-20 09:14:53.076 through 09:14:53.115; repeated occurrences elided ...]
00:29:27.763 [2024-11-20 09:14:53.115975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.763 [2024-11-20 09:14:53.115992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.763 qpair failed and we were unable to recover it. 00:29:27.763 [2024-11-20 09:14:53.116319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.763 [2024-11-20 09:14:53.116337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.763 qpair failed and we were unable to recover it. 00:29:27.763 [2024-11-20 09:14:53.116670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.763 [2024-11-20 09:14:53.116686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.763 qpair failed and we were unable to recover it. 00:29:27.763 [2024-11-20 09:14:53.117028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.763 [2024-11-20 09:14:53.117047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.763 qpair failed and we were unable to recover it. 00:29:27.763 [2024-11-20 09:14:53.117281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.763 [2024-11-20 09:14:53.117298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.763 qpair failed and we were unable to recover it. 
00:29:27.763 [2024-11-20 09:14:53.117579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.763 [2024-11-20 09:14:53.117595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.763 qpair failed and we were unable to recover it. 00:29:27.763 [2024-11-20 09:14:53.117917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.763 [2024-11-20 09:14:53.117938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.118266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.118285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.118658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.118675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.119011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.119028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 
00:29:27.764 [2024-11-20 09:14:53.119342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.119359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.119715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.119734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.120088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.120106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.120433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.120451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.120802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.120820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 
00:29:27.764 [2024-11-20 09:14:53.121174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.121192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.121561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.121579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.121912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.121929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.122334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.122352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.122702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.122721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 
00:29:27.764 [2024-11-20 09:14:53.123088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.123105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.123437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.123454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.123769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.123785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.124138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.124156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.124495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.124514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 
00:29:27.764 [2024-11-20 09:14:53.124831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.124850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.125235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.125253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.125595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.125614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.125969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.125987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.126321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.126339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 
00:29:27.764 [2024-11-20 09:14:53.126654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.126673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.127008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.127025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.127232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.127251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.127611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.127630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.127979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.128000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 
00:29:27.764 [2024-11-20 09:14:53.128335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.128353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.128743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.128763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.129108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.129126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.129452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.129471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.129696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.129714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 
00:29:27.764 [2024-11-20 09:14:53.130013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.130030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.130279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.130298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.130622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.130641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.130981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.130999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.131381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.131400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 
00:29:27.764 [2024-11-20 09:14:53.131692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.131709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.764 [2024-11-20 09:14:53.132087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.764 [2024-11-20 09:14:53.132109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.764 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.132435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.132454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.132793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.132810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.133154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.133182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 
00:29:27.765 [2024-11-20 09:14:53.133499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.133515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.133871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.133888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.134119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.134137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.134467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.134485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.134802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.134819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 
00:29:27.765 [2024-11-20 09:14:53.135174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.135193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.135527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.135545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.135868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.135886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.136201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.136220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.136308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.136323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 
00:29:27.765 [2024-11-20 09:14:53.136626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.136642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.136984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.137000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.137340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.137358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.137684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.137701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.138047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.138067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 
00:29:27.765 [2024-11-20 09:14:53.138414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.138433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.138776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.138794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.138921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.138939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.139265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.139283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.139624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.139644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 
00:29:27.765 [2024-11-20 09:14:53.139982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.140000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.140341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.140358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.140742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.140760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.141101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.141125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 00:29:27.765 [2024-11-20 09:14:53.141475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.765 [2024-11-20 09:14:53.141494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.765 qpair failed and we were unable to recover it. 
00:29:27.765 [2024-11-20 09:14:53.141829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.765 [2024-11-20 09:14:53.141847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.765 qpair failed and we were unable to recover it.
00:29:27.768 [2024-11-20 09:14:53.180575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.768 [2024-11-20 09:14:53.180593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.768 qpair failed and we were unable to recover it. 00:29:27.768 [2024-11-20 09:14:53.180941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.768 [2024-11-20 09:14:53.180959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.768 qpair failed and we were unable to recover it. 00:29:27.768 [2024-11-20 09:14:53.181170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.768 [2024-11-20 09:14:53.181189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.768 qpair failed and we were unable to recover it. 00:29:27.768 [2024-11-20 09:14:53.181484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.768 [2024-11-20 09:14:53.181503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.768 qpair failed and we were unable to recover it. 00:29:27.768 [2024-11-20 09:14:53.181851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.768 [2024-11-20 09:14:53.181867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.768 qpair failed and we were unable to recover it. 
00:29:27.768 [2024-11-20 09:14:53.182082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.768 [2024-11-20 09:14:53.182099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.768 qpair failed and we were unable to recover it. 00:29:27.768 [2024-11-20 09:14:53.182434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.182453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.182793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.182810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.183154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.183178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.183506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.183524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 
00:29:27.769 [2024-11-20 09:14:53.183867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.183883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.184237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.184256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.184597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.184614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.184933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.184950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.185288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.185306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 
00:29:27.769 [2024-11-20 09:14:53.185663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.185681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.186014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.186030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.186382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.186402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.186739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.186757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.187086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.187104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 
00:29:27.769 [2024-11-20 09:14:53.187327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.187347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.187748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.187765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.188060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.188078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.188379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.188397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.188737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.188756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 
00:29:27.769 [2024-11-20 09:14:53.189074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.189091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.189438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.189455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.189779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.189798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.190122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.190140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.190361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.190379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 
00:29:27.769 [2024-11-20 09:14:53.190708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.190727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.191067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.191085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.191291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.191312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.191638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.191657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.192032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.192050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 
00:29:27.769 [2024-11-20 09:14:53.192420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.192440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.192771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.192792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.193134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.193152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.193360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.193380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 00:29:27.769 [2024-11-20 09:14:53.193718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.769 [2024-11-20 09:14:53.193736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.769 qpair failed and we were unable to recover it. 
00:29:27.769 [2024-11-20 09:14:53.194070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.194087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.194420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.194440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.194779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.194797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.195079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.195101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.195419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.195440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 
00:29:27.770 [2024-11-20 09:14:53.195766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.195785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.196118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.196136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.196443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.196462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.196798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.196815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.197149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.197175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 
00:29:27.770 [2024-11-20 09:14:53.197529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.197548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.197889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.197907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.198124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.198142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.198301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.198320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.198634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.198651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 
00:29:27.770 [2024-11-20 09:14:53.198985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.199003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.199335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.199353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.199672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.199690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.200020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.200036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.200387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.200407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 
00:29:27.770 [2024-11-20 09:14:53.200599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.200616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.200969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.200986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.201335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.201353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.201749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.201767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.201971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.201988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 
00:29:27.770 [2024-11-20 09:14:53.202314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.202333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.202551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.202571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.202911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.202929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.203296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.203314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.203620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.203637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 
00:29:27.770 [2024-11-20 09:14:53.204020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.204037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.204347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.204365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.204720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.204737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.205099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.205117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 00:29:27.770 [2024-11-20 09:14:53.205452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.770 [2024-11-20 09:14:53.205471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:27.770 qpair failed and we were unable to recover it. 
00:29:27.770 [2024-11-20 09:14:53.205815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.770 [2024-11-20 09:14:53.205834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:27.770 qpair failed and we were unable to recover it.
00:29:27.770 [... the identical three-record sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 09:14:53.206 through 09:14:53.245 ...]
00:29:28.050 [2024-11-20 09:14:53.245731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.245748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.245962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.245979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.246249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.246265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.246599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.246616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.246953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.246970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 
00:29:28.050 [2024-11-20 09:14:53.247311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.247330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.247654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.247672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.248012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.248030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.248395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.248412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.248746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.248765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 
00:29:28.050 [2024-11-20 09:14:53.249102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.249120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.249475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.249494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.249825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.249843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.250185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.250206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.250549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.250565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 
00:29:28.050 [2024-11-20 09:14:53.250878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.250895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.251227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.251245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.251602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.251619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.251938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.251955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.252311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.252328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 
00:29:28.050 [2024-11-20 09:14:53.252556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.252572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.252910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.252928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.253273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.253291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.253607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.253624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.253966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.253985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 
00:29:28.050 [2024-11-20 09:14:53.254317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.254335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.254664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.254681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.255014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.255030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.255383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.255402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.255750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.255767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 
00:29:28.050 [2024-11-20 09:14:53.256107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.256125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.256477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.256495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.256851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.256869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.257209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.257225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 00:29:28.050 [2024-11-20 09:14:53.257555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.050 [2024-11-20 09:14:53.257572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.050 qpair failed and we were unable to recover it. 
00:29:28.050 [2024-11-20 09:14:53.257920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.257936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.258321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.258340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.258685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.258703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.259043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.259061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.259389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.259407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 
00:29:28.051 [2024-11-20 09:14:53.259785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.259804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.260132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.260148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.260493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.260511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.260848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.260865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.261209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.261226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 
00:29:28.051 [2024-11-20 09:14:53.261619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.261635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.261967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.261985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.262308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.262326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.262670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.262688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.263026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.263042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 
00:29:28.051 [2024-11-20 09:14:53.263388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.263406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.263742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.263759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.264113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.264131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.264459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.264480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.264707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.264723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 
00:29:28.051 [2024-11-20 09:14:53.265053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.265070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.265406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.265423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.265762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.265779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.265988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.266006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.266291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.266308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 
00:29:28.051 [2024-11-20 09:14:53.266641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.266657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.266993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.267010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.267339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.267356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.267676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.267693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.268030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.268049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 
00:29:28.051 [2024-11-20 09:14:53.268396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.268414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.268750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.268769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.269100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.269117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.269504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.269524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.269752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.269770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 
00:29:28.051 [2024-11-20 09:14:53.270109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.270127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.270441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.270458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.051 [2024-11-20 09:14:53.270792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.051 [2024-11-20 09:14:53.270811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.051 qpair failed and we were unable to recover it. 00:29:28.052 [2024-11-20 09:14:53.271141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.052 [2024-11-20 09:14:53.271165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.052 qpair failed and we were unable to recover it. 00:29:28.052 [2024-11-20 09:14:53.271502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.052 [2024-11-20 09:14:53.271521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.052 qpair failed and we were unable to recover it. 
00:29:28.052 [2024-11-20 09:14:53.271851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.052 [2024-11-20 09:14:53.271869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.052 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f0628000b90 (addr=10.0.0.2, port=4420) repeats continuously, timestamps 09:14:53.272233 through 09:14:53.311683 ...]
00:29:28.055 [2024-11-20 09:14:53.312106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.312123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.312462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.312482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.312817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.312834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.313177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.313194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.313532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.313549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 
00:29:28.055 [2024-11-20 09:14:53.313863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.313879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.314216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.314233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.314574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.314593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.314915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.314931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.315238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.315254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 
00:29:28.055 [2024-11-20 09:14:53.315618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.315636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.315969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.315988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.316321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.316339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.316661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.316678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.317016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.317032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 
00:29:28.055 [2024-11-20 09:14:53.317338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.317356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.317713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.317730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.318064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.318080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.318283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.318302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.318627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.318644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 
00:29:28.055 [2024-11-20 09:14:53.318980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.318997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.319337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.319357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.319720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.319737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.320079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.320098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.320437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.320455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 
00:29:28.055 [2024-11-20 09:14:53.320800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.320818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.321164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.321184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-11-20 09:14:53.321519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.055 [2024-11-20 09:14:53.321535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.321872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.321890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.322202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.322221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 
00:29:28.056 [2024-11-20 09:14:53.322562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.322580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.322963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.322981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.323312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.323332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.323716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.323732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.324066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.324083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 
00:29:28.056 [2024-11-20 09:14:53.324419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.324437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.324772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.324791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.325123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.325140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.325533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.325551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.325867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.325883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 
00:29:28.056 [2024-11-20 09:14:53.326234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.326252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.326597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.326615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.326815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.326833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.327130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.327147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.327507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.327525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 
00:29:28.056 [2024-11-20 09:14:53.327862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.327881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.328219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.328237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.328576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.328594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.328932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.328949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.329263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.329281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 
00:29:28.056 [2024-11-20 09:14:53.329616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.329638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.329970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.329987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.330325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.330344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.330676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.330692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.331025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.331044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 
00:29:28.056 [2024-11-20 09:14:53.331256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.331275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.331600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.331619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.331954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.331971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.332314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.332334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.332661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.332679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 
00:29:28.056 [2024-11-20 09:14:53.333011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.333027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.333335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.333352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.333681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.333699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.334041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.334058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.056 [2024-11-20 09:14:53.334396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.334415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 
00:29:28.056 [2024-11-20 09:14:53.334742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.056 [2024-11-20 09:14:53.334758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.056 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.335103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.335122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.335501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.335519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.335739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.335755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.336107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.336125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 
00:29:28.057 [2024-11-20 09:14:53.336456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.336476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.336812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.336831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.337172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.337191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.337505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.337521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.337865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.337884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 
00:29:28.057 [2024-11-20 09:14:53.338218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.338235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.338534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.338550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.338878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.338895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.339232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.339251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 00:29:28.057 [2024-11-20 09:14:53.339634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.057 [2024-11-20 09:14:53.339651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.057 qpair failed and we were unable to recover it. 
00:29:28.060 [2024-11-20 09:14:53.378432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.378450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.378673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.378690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.379035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.379054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.379409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.379427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.379776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.379792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 
00:29:28.060 [2024-11-20 09:14:53.379995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.380012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.380339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.380356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.380680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.380697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.381039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.381056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.381278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.381297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 
00:29:28.060 [2024-11-20 09:14:53.381648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.381665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.381993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.382010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.382334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.382352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.382697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.382714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.383071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.383088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 
00:29:28.060 [2024-11-20 09:14:53.383390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.383408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.383743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.383759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.384104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.384121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.384444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.384462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.384778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.384795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 
00:29:28.060 [2024-11-20 09:14:53.385130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.060 [2024-11-20 09:14:53.385151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.060 qpair failed and we were unable to recover it. 00:29:28.060 [2024-11-20 09:14:53.385517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.385537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.385875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.385892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.386118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.386135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.386495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.386513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 
00:29:28.061 [2024-11-20 09:14:53.386868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.386887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.387217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.387235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.387549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.387566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.387912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.387929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.388155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.388186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 
00:29:28.061 [2024-11-20 09:14:53.388398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.388415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.388729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.388745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.389079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.389098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.389399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.389417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.389757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.389777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 
00:29:28.061 [2024-11-20 09:14:53.390124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.390141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.390476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.390493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.390822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.390838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.391082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.391099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.391453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.391473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 
00:29:28.061 [2024-11-20 09:14:53.391648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.391666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.391999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.392016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.392378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.392397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.392729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.392745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.393084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.393102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 
00:29:28.061 [2024-11-20 09:14:53.393507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.393525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.393860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.393877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.394224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.394242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.394586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.394603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.394940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.394959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 
00:29:28.061 [2024-11-20 09:14:53.395298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.395316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.395651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.395671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.395849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.395866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.396206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.396225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.396420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.396437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 
00:29:28.061 [2024-11-20 09:14:53.396834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.396851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.397184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.061 [2024-11-20 09:14:53.397201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.061 qpair failed and we were unable to recover it. 00:29:28.061 [2024-11-20 09:14:53.397534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.397551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.397865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.397882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.398236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.398254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 
00:29:28.062 [2024-11-20 09:14:53.398582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.398602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.398981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.399000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.399308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.399325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.399548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.399565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.399923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.399940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 
00:29:28.062 [2024-11-20 09:14:53.400164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.400182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.400459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.400478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.400790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.400808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.401187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.401205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.401546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.401563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 
00:29:28.062 [2024-11-20 09:14:53.401908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.401926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.402150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.402174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.402522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.402541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.402776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.402793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.403137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.403155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 
00:29:28.062 [2024-11-20 09:14:53.403382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.403399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.403636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.403655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.403973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.403991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.404286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.404303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.404681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.404700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 
00:29:28.062 [2024-11-20 09:14:53.405025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.405043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.405265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.405283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.405659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.405676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.406022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.406038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.406359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.406377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 
00:29:28.062 [2024-11-20 09:14:53.406721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.406738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.407072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.407089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.407453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.407471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.407788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.407804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.408140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.408165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 
00:29:28.062 [2024-11-20 09:14:53.408465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.408484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.408851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.062 [2024-11-20 09:14:53.408871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.062 qpair failed and we were unable to recover it. 00:29:28.062 [2024-11-20 09:14:53.409217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.409250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.409453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.409472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.409793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.409812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 
00:29:28.063 [2024-11-20 09:14:53.410191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.410209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.410536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.410554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.410885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.410902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.411234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.411252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.411560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.411577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 
00:29:28.063 [2024-11-20 09:14:53.411927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.411949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.412296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.412313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.412667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.412686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.413022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.413039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.413449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.413468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 
00:29:28.063 [2024-11-20 09:14:53.413759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.413778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.414117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.414134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.414451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.414469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.414799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.414815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.415165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.415186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 
00:29:28.063 [2024-11-20 09:14:53.415531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.415550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.415886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.415903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.416236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.416254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.416638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.416654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.416989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.417006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 
00:29:28.063 [2024-11-20 09:14:53.417350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.417367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.417703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.417723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.417940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.417957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.418299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.418317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.418657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.418674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 
00:29:28.063 [2024-11-20 09:14:53.419022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.419040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.419386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.419404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.419608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.419624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.419926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.419944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.420286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.420303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 
00:29:28.063 [2024-11-20 09:14:53.420673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.420690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.420982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.421000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.421318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.421336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.421692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.421711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 00:29:28.063 [2024-11-20 09:14:53.422052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.063 [2024-11-20 09:14:53.422070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.063 qpair failed and we were unable to recover it. 
00:29:28.064 [2024-11-20 09:14:53.422399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.422417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.422591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.422607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.422953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.422969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.423312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.423329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.423671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.423688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 
00:29:28.064 [2024-11-20 09:14:53.424020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.424038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.424350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.424368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.424721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.424737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.424942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.424960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.425331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.425350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 
00:29:28.064 [2024-11-20 09:14:53.425579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.425599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.425914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.425932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.426147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.426171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.426535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.426552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.426890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.426908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 
00:29:28.064 [2024-11-20 09:14:53.427250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.427269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.427609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.427628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.427947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.427964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.428309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.428329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.428670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.428687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 
00:29:28.064 [2024-11-20 09:14:53.429026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.429043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.429258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.429276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.429631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.429647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.429980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.429996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.430317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.430335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 
00:29:28.064 [2024-11-20 09:14:53.430544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.430561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.430898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.430915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.431255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.431272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.431564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.431581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.431916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.431932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 
00:29:28.064 [2024-11-20 09:14:53.432283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.432301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.432498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.432515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.432868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.432885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.433067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.433085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.433280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.433298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 
00:29:28.064 [2024-11-20 09:14:53.433639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.433656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.433967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.433983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.064 [2024-11-20 09:14:53.434307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.064 [2024-11-20 09:14:53.434325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.064 qpair failed and we were unable to recover it. 00:29:28.065 [2024-11-20 09:14:53.434673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.065 [2024-11-20 09:14:53.434691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.065 qpair failed and we were unable to recover it. 00:29:28.065 [2024-11-20 09:14:53.435035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.065 [2024-11-20 09:14:53.435053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.065 qpair failed and we were unable to recover it. 
00:29:28.065 [2024-11-20 09:14:53.435347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.435365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.435718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.435736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.436061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.436077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.436510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.436528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.436872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.436891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.437225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.437243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.437589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.437607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.437823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.437842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.438172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.438191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.438584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.438601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.438941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.438964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.439327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.439346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.439680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.439698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.440014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.440031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.440238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.440255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.440596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.440613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.440938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.440954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.441290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.441308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.441639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.441657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.441996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.442012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.442342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.442361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.442672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.442689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.442895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.442912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.443234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.443253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.443599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.443618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.443929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.443945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.444286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.444304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.444658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.444675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.445013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.445031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.445247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.445265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.445487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.445505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.445849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.445868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.446209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.446227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.446543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.446561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.446883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.065 [2024-11-20 09:14:53.446900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.065 qpair failed and we were unable to recover it.
00:29:28.065 [2024-11-20 09:14:53.447234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.447252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.447591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.447609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.447933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.447950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.448291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.448308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.448667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.448685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.449022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.449039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.449242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.449262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.449531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.449550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.449891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.449909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.450233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.450250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.450601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.450618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.450956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.450975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.451313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.451330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.451612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.451629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.451946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.451963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.452374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.452395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.452735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.452754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.453084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.453101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.453419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.453436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.453775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.453791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.454114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.454133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.454437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.454455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.454769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.454786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.455117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.455135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.455447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.455464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.455790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.455806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.456119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.456137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.456477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.456495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.456831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.456849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.457075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.457093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.457321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.457342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.457570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.066 [2024-11-20 09:14:53.457588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.066 qpair failed and we were unable to recover it.
00:29:28.066 [2024-11-20 09:14:53.457918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.457936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.458277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.458296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.458499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.458516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.458845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.458864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.459216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.459233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.459569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.459588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.459900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.459916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.460232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.460249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.460471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.460488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.460839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.460856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.461197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.461215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.461551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.461568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.461910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.461926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.462273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.462291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.462634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.462651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.462989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.463008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.463341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.463358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.463749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.463766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.464086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.464104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.464519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.464536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.464767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.464783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.465135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.465152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.465461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.465478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.465809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.465830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.466168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.466186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.466520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.466537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.466743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.466762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.467096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.467113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.467439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.467456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.467740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.467757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.468081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.468099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.468429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.468448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.468787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.468806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.469142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.469170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.469478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.469495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.469829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.469846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.470167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.067 [2024-11-20 09:14:53.470185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.067 qpair failed and we were unable to recover it.
00:29:28.067 [2024-11-20 09:14:53.470517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.068 [2024-11-20 09:14:53.470535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.068 qpair failed and we were unable to recover it.
00:29:28.068 [2024-11-20 09:14:53.470872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.068 [2024-11-20 09:14:53.470889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.068 qpair failed and we were unable to recover it.
00:29:28.068 [2024-11-20 09:14:53.471223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.068 [2024-11-20 09:14:53.471243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.068 qpair failed and we were unable to recover it.
00:29:28.068 [2024-11-20 09:14:53.471601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.068 [2024-11-20 09:14:53.471618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.068 qpair failed and we were unable to recover it.
00:29:28.068 [2024-11-20 09:14:53.471955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.068 [2024-11-20 09:14:53.471973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.068 qpair failed and we were unable to recover it.
00:29:28.068 [2024-11-20 09:14:53.472287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.068 [2024-11-20 09:14:53.472306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.068 qpair failed and we were unable to recover it.
00:29:28.068 [2024-11-20 09:14:53.472668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.068 [2024-11-20 09:14:53.472686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.068 qpair failed and we were unable to recover it.
00:29:28.068 [2024-11-20 09:14:53.473015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.068 [2024-11-20 09:14:53.473031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.068 qpair failed and we were unable to recover it.
00:29:28.068 [2024-11-20 09:14:53.473389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.068 [2024-11-20 09:14:53.473409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.068 qpair failed and we were unable to recover it.
00:29:28.068 [2024-11-20 09:14:53.473768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.068 [2024-11-20 09:14:53.473785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.068 qpair failed and we were unable to recover it.
00:29:28.068 [2024-11-20 09:14:53.474124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.068 [2024-11-20 09:14:53.474143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.068 qpair failed and we were unable to recover it.
00:29:28.068 [2024-11-20 09:14:53.474376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.474393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.474733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.474752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.475101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.475118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.475346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.475363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.475704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.475721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 
00:29:28.068 [2024-11-20 09:14:53.476062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.476081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.476417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.476435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.476776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.476795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.477125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.477142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.477478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.477497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 
00:29:28.068 [2024-11-20 09:14:53.477859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.477877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.478216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.478233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.478560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.478577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.478954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.478970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.479267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.479286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 
00:29:28.068 [2024-11-20 09:14:53.479599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.479620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.479955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.479974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.480184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.480202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.480567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.480585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.480923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.480940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 
00:29:28.068 [2024-11-20 09:14:53.481281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.481298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.481630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.481646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.481960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.481977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.482316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.482333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.482671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.482690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 
00:29:28.068 [2024-11-20 09:14:53.483025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.483042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.483383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.068 [2024-11-20 09:14:53.483403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.068 qpair failed and we were unable to recover it. 00:29:28.068 [2024-11-20 09:14:53.483731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.483748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.483926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.483944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.484293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.484312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 
00:29:28.069 [2024-11-20 09:14:53.484657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.484673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.485024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.485044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.485382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.485399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.485739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.485758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.486095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.486111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 
00:29:28.069 [2024-11-20 09:14:53.486442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.486459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.486801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.486817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.487173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.487191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.487529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.487545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.487881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.487899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 
00:29:28.069 [2024-11-20 09:14:53.488228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.488246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.488582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.488601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.488937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.488955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.489189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.489206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.489570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.489586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 
00:29:28.069 [2024-11-20 09:14:53.489926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.489942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.490282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.490300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.490633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.490653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.490982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.491000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.491338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.491356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 
00:29:28.069 [2024-11-20 09:14:53.491687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.491704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.492044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.492063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.492285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.492304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.492641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.492659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.492858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.492876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 
00:29:28.069 [2024-11-20 09:14:53.493198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.493216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.493575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.493592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.493960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.493978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.494318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.494336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.494674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.494693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 
00:29:28.069 [2024-11-20 09:14:53.495022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.495040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.495384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.495403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.495765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.495782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.496156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.496182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 00:29:28.069 [2024-11-20 09:14:53.496530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.069 [2024-11-20 09:14:53.496547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.069 qpair failed and we were unable to recover it. 
00:29:28.070 [2024-11-20 09:14:53.496880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.496897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.497218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.497236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.497568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.497585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.497923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.497940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.498277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.498294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 
00:29:28.070 [2024-11-20 09:14:53.498635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.498651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.498987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.499006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.499419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.499437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.499659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.499675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.500043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.500060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 
00:29:28.070 [2024-11-20 09:14:53.500396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.500413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.500756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.500773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.501108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.501127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.501445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.501463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 00:29:28.070 [2024-11-20 09:14:53.501796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.070 [2024-11-20 09:14:53.501813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.070 qpair failed and we were unable to recover it. 
00:29:28.070 [... log truncated: the same three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with advancing timestamps from 09:14:53.502150 through 09:14:53.540605 ...]
00:29:28.073 [2024-11-20 09:14:53.540824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.540840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.541211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.541230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.541564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.541582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.541913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.541932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.542259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.542276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 
00:29:28.073 [2024-11-20 09:14:53.542609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.542628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.542840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.542859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.543210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.543227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.543572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.543589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.543972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.543990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 
00:29:28.073 [2024-11-20 09:14:53.544210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.544228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.544565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.544583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.544898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.544919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.545257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.545274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.545613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.545632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 
00:29:28.073 [2024-11-20 09:14:53.545977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.545993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.546307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.546325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.546670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.546687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.547010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.547029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.547287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.547305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 
00:29:28.073 [2024-11-20 09:14:53.547670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.073 [2024-11-20 09:14:53.547690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.073 qpair failed and we were unable to recover it. 00:29:28.073 [2024-11-20 09:14:53.548028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.548045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.548350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.548368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.548700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.548717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.549035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.549051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 
00:29:28.074 [2024-11-20 09:14:53.549403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.549421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.549764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.549784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.550151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.550173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.550482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.550498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.550879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.550895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 
00:29:28.074 [2024-11-20 09:14:53.551226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.551243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.551585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.551602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.551920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.551936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.552283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.552300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.552510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.552529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 
00:29:28.074 [2024-11-20 09:14:53.552884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.552902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.553242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.553261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.553611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.553629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.553961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.553981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.554331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.554349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 
00:29:28.074 [2024-11-20 09:14:53.554561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.554577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.554926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.554943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.555279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.555298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.555617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.555634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.555967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.555988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 
00:29:28.074 [2024-11-20 09:14:53.556202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.556221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.556445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.556463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.556803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.556820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.557175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.557193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.557514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.557532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 
00:29:28.074 [2024-11-20 09:14:53.557917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.557934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.558275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.558295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.074 [2024-11-20 09:14:53.558638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.074 [2024-11-20 09:14:53.558659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.074 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.558996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.559015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.559376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.559397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 
00:29:28.348 [2024-11-20 09:14:53.559743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.559762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.560093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.560111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.560520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.560538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.560872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.560891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.561215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.561234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 
00:29:28.348 [2024-11-20 09:14:53.561598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.561615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.561948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.561965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.562326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.562346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.562676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.562693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.563052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.563069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 
00:29:28.348 [2024-11-20 09:14:53.563405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.563424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.563755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.563772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.564106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.564125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.564470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.564489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.564840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.564859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 
00:29:28.348 [2024-11-20 09:14:53.565191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.565210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.565619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.565637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.565974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.565992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.566328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.348 [2024-11-20 09:14:53.566347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.348 qpair failed and we were unable to recover it. 00:29:28.348 [2024-11-20 09:14:53.566675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.349 [2024-11-20 09:14:53.566694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.349 qpair failed and we were unable to recover it. 
00:29:28.349 [2024-11-20 09:14:53.567023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.349 [2024-11-20 09:14:53.567041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.349 qpair failed and we were unable to recover it.
00:29:28.349 (last message repeated: identical connect() failed / qpair connection error entries for tqpair=0x7f0628000b90, addr=10.0.0.2, port=4420 recur from 09:14:53.567379 through 09:14:53.605946; duplicate entries elided)
00:29:28.352 [2024-11-20 09:14:53.606284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.606302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it.
00:29:28.352 [2024-11-20 09:14:53.606643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.606660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.606998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.607015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.607330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.607348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.607674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.607693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.608038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.608055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 
00:29:28.352 [2024-11-20 09:14:53.608388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.608407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.608741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.608759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.609140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.609164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.609516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.609535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.609950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.609968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 
00:29:28.352 [2024-11-20 09:14:53.610344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.610364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.610701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.610719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.611055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.611073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.611396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.611415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.611730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.611749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 
00:29:28.352 [2024-11-20 09:14:53.612079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.612097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.612422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.612441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.612831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.612849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.613182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.613200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.613587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.613604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 
00:29:28.352 [2024-11-20 09:14:53.613947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.613965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.614311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.614330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.614634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.614655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.614998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.615017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.615244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.615264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 
00:29:28.352 [2024-11-20 09:14:53.615488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.615506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.615877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.615894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.616226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.616245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.616550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.616569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.616903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.616922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 
00:29:28.352 [2024-11-20 09:14:53.617262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.617280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.352 [2024-11-20 09:14:53.617597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.352 [2024-11-20 09:14:53.617616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.352 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.617964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.617983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.618330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.618349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.618708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.618726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 
00:29:28.353 [2024-11-20 09:14:53.619071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.619089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.619404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.619424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.619621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.619642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.619967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.619987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.620327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.620347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 
00:29:28.353 [2024-11-20 09:14:53.620664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.620681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.621057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.621076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.621438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.621457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.621797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.621814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.622147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.622174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 
00:29:28.353 [2024-11-20 09:14:53.622513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.622531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.622873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.622892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.623233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.623252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.623616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.623635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.623976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.623995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 
00:29:28.353 [2024-11-20 09:14:53.624376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.624395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.624736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.624754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.625131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.625149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.625460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.625477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.625793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.625810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 
00:29:28.353 [2024-11-20 09:14:53.626039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.626055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.626308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.626328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.626681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.626697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.627010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.627027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.627245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.627264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 
00:29:28.353 [2024-11-20 09:14:53.627615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.627632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.628006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.628023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.628234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.628255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.628590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.628608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.353 [2024-11-20 09:14:53.628944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.628960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 
00:29:28.353 [2024-11-20 09:14:53.629307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.353 [2024-11-20 09:14:53.629324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.353 qpair failed and we were unable to recover it. 00:29:28.354 [2024-11-20 09:14:53.629654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.354 [2024-11-20 09:14:53.629671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.354 qpair failed and we were unable to recover it. 00:29:28.354 [2024-11-20 09:14:53.630005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.354 [2024-11-20 09:14:53.630023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.354 qpair failed and we were unable to recover it. 00:29:28.354 [2024-11-20 09:14:53.630219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.354 [2024-11-20 09:14:53.630238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.354 qpair failed and we were unable to recover it. 00:29:28.354 [2024-11-20 09:14:53.630585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.354 [2024-11-20 09:14:53.630602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.354 qpair failed and we were unable to recover it. 
00:29:28.354 [2024-11-20 09:14:53.630917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.354 [2024-11-20 09:14:53.630934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.354 qpair failed and we were unable to recover it. 00:29:28.354 [2024-11-20 09:14:53.631288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.354 [2024-11-20 09:14:53.631307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.354 qpair failed and we were unable to recover it. 00:29:28.354 [2024-11-20 09:14:53.631655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.354 [2024-11-20 09:14:53.631673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.354 qpair failed and we were unable to recover it. 00:29:28.354 [2024-11-20 09:14:53.632020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.354 [2024-11-20 09:14:53.632037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.354 qpair failed and we were unable to recover it. 00:29:28.354 [2024-11-20 09:14:53.632370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.354 [2024-11-20 09:14:53.632387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.354 qpair failed and we were unable to recover it. 
00:29:28.354 [2024-11-20 09:14:53.632597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.354 [2024-11-20 09:14:53.632613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.354 qpair failed and we were unable to recover it. 
[log trimmed: the connect()/qpair error pair above (errno = 111, ECONNREFUSED) repeats roughly 110 more times for the same tqpair=0x7f0628000b90, addr=10.0.0.2, port=4420, between 09:14:53.632 and 09:14:53.670, differing only in timestamps]
00:29:28.357 [2024-11-20 09:14:53.670782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.670799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.671185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.671202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.671547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.671564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.671904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.671922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.672170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.672189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 
00:29:28.357 [2024-11-20 09:14:53.672547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.672567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.672892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.672910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.673249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.673267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.673620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.673637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.673980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.673997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 
00:29:28.357 [2024-11-20 09:14:53.674312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.674329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.674654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.674673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.675044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.675060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.675266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.675284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.675636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.675654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 
00:29:28.357 [2024-11-20 09:14:53.676002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.676021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.676337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.676354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.676716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.676734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.677076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.677092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.677435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.677457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 
00:29:28.357 [2024-11-20 09:14:53.677792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.677810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.678024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.678042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.678380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.357 [2024-11-20 09:14:53.678397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.357 qpair failed and we were unable to recover it. 00:29:28.357 [2024-11-20 09:14:53.678616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.678634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.678878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.678894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 
00:29:28.358 [2024-11-20 09:14:53.679231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.679249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.679658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.679675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.680047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.680065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.680413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.680430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.680774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.680790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 
00:29:28.358 [2024-11-20 09:14:53.681030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.681046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.681356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.681374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.681720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.681737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.682079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.682097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.682411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.682430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 
00:29:28.358 [2024-11-20 09:14:53.682772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.682789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.683120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.683138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.683453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.683471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.683796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.683814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.684032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.684049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 
00:29:28.358 [2024-11-20 09:14:53.684362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.684381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.684752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.684769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.684999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.685017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.685258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.685274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.685619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.685636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 
00:29:28.358 [2024-11-20 09:14:53.685856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.685872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.686212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.686229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.686447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.686466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.686807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.686825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.687059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.687078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 
00:29:28.358 [2024-11-20 09:14:53.687420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.687440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.687746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.687762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.688100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.688119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.688343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.688363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.688730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.688748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 
00:29:28.358 [2024-11-20 09:14:53.689082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.689100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.689430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.689449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.689788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.689806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.690130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.690148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 00:29:28.358 [2024-11-20 09:14:53.690467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.358 [2024-11-20 09:14:53.690489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.358 qpair failed and we were unable to recover it. 
00:29:28.358 [2024-11-20 09:14:53.690823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.690841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.691181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.691199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.691514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.691533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.691873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.691891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.692103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.692120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 
00:29:28.359 [2024-11-20 09:14:53.692437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.692456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.692671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.692689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.693020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.693038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.693381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.693399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.693722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.693738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 
00:29:28.359 [2024-11-20 09:14:53.693953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.693972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.694327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.694347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.694544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.694561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.694906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.694925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 00:29:28.359 [2024-11-20 09:14:53.695244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.695262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 
00:29:28.359 [2024-11-20 09:14:53.695592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.359 [2024-11-20 09:14:53.695609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.359 qpair failed and we were unable to recover it. 
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420, qpair failed and unrecoverable) repeated for roughly 110 further attempts between 09:14:53.695 and 09:14:53.735; only the timestamps differ ...]
00:29:28.362 [2024-11-20 09:14:53.735097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.735116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 
00:29:28.362 [2024-11-20 09:14:53.735449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.735467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.735804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.735822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.736148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.736175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.736513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.736532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.736842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.736860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 
00:29:28.362 [2024-11-20 09:14:53.737195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.737214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.737574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.737591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.737948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.737966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.738179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.738200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.738529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.738547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 
00:29:28.362 [2024-11-20 09:14:53.738880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.738898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.739234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.739252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.739580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.739599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.739935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.739953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.740285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.740305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 
00:29:28.362 [2024-11-20 09:14:53.740640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.740657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.740979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.362 [2024-11-20 09:14:53.740998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.362 qpair failed and we were unable to recover it. 00:29:28.362 [2024-11-20 09:14:53.741222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.741239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.741581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.741600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.741939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.741956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 
00:29:28.363 [2024-11-20 09:14:53.742297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.742315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.742633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.742650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.743005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.743023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.743419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.743439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.743781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.743798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 
00:29:28.363 [2024-11-20 09:14:53.744134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.744152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.744490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.744508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.744847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.744866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.745180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.745199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.745546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.745568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 
00:29:28.363 [2024-11-20 09:14:53.745892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.745909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.746231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.746249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.746641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.746659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.746989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.747006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.747335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.747353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 
00:29:28.363 [2024-11-20 09:14:53.747687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.747706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.747922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.747942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.748273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.748292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.748626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.748644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.748975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.748995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 
00:29:28.363 [2024-11-20 09:14:53.749342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.749360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.749709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.749727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.750062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.750081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.750420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.750437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.750774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.750791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 
00:29:28.363 [2024-11-20 09:14:53.751136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.751153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.751365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.751383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.751725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.751743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.752056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.752075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.752400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.752418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 
00:29:28.363 [2024-11-20 09:14:53.752611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.752629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.752960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.752978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.753317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.753337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.753670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.753689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 00:29:28.363 [2024-11-20 09:14:53.754026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.363 [2024-11-20 09:14:53.754045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.363 qpair failed and we were unable to recover it. 
00:29:28.363 [2024-11-20 09:14:53.754386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.754405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.754742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.754762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.755114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.755132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.755455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.755474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.755701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.755719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 
00:29:28.364 [2024-11-20 09:14:53.756048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.756067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.756406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.756424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.756766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.756784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.757112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.757130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.757337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.757358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 
00:29:28.364 [2024-11-20 09:14:53.757685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.757703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.758041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.758059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.758397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.758416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.758745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.758763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.759098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.759120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 
00:29:28.364 [2024-11-20 09:14:53.759436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.759454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.759790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.759808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.760124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.760143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.760494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.760513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 00:29:28.364 [2024-11-20 09:14:53.760888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.364 [2024-11-20 09:14:53.760907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.364 qpair failed and we were unable to recover it. 
00:29:28.364 [2024-11-20 09:14:53.761229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.364 [2024-11-20 09:14:53.761248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.364 qpair failed and we were unable to recover it.
00:29:28.367 [2024-11-20 09:14:53.800839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.800858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 00:29:28.367 [2024-11-20 09:14:53.801194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.801212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 00:29:28.367 [2024-11-20 09:14:53.801532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.801549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 00:29:28.367 [2024-11-20 09:14:53.801878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.801895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 00:29:28.367 [2024-11-20 09:14:53.802233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.802252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 
00:29:28.367 [2024-11-20 09:14:53.802590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.802607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 00:29:28.367 [2024-11-20 09:14:53.802939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.802955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 00:29:28.367 [2024-11-20 09:14:53.803199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.803217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 00:29:28.367 [2024-11-20 09:14:53.803557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.803576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 00:29:28.367 [2024-11-20 09:14:53.803918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.803935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 
00:29:28.367 [2024-11-20 09:14:53.804272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.804289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 00:29:28.367 [2024-11-20 09:14:53.804678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.367 [2024-11-20 09:14:53.804695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.367 qpair failed and we were unable to recover it. 00:29:28.367 [2024-11-20 09:14:53.805036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.805053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.805384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.805402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.805735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.805752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 
00:29:28.368 [2024-11-20 09:14:53.806069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.806085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.806394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.806413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.806745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.806763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.807087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.807104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.807461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.807480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 
00:29:28.368 [2024-11-20 09:14:53.807679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.807698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.808047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.808065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.808406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.808424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.808764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.808781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.809170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.809189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 
00:29:28.368 [2024-11-20 09:14:53.809524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.809541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.809872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.809891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.810211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.810229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.810573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.810591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.810924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.810941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 
00:29:28.368 [2024-11-20 09:14:53.811276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.811295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.811641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.811659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.811997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.812016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.812251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.812268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.812600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.812618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 
00:29:28.368 [2024-11-20 09:14:53.812954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.812971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.813310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.813327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.813665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.813682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.813998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.814015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.814344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.814366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 
00:29:28.368 [2024-11-20 09:14:53.814713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.814732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.815070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.815087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.815423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.815443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.815809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.815826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.816168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.816187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 
00:29:28.368 [2024-11-20 09:14:53.816527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.816545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.816858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.816875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.817192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.817209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.817554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.817573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 00:29:28.368 [2024-11-20 09:14:53.817899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.368 [2024-11-20 09:14:53.817917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.368 qpair failed and we were unable to recover it. 
00:29:28.368 [2024-11-20 09:14:53.818230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.818249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.818623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.818642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.818975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.818992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.819337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.819354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.819702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.819718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 
00:29:28.369 [2024-11-20 09:14:53.820045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.820064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.820428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.820446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.820783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.820802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.821170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.821188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.821525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.821544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 
00:29:28.369 [2024-11-20 09:14:53.821875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.821892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.822233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.822251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.822607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.822623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.822941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.822957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.823305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.823322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 
00:29:28.369 [2024-11-20 09:14:53.823691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.823708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.824032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.824049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.824378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.824395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.824595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.824614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.824913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.824930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 
00:29:28.369 [2024-11-20 09:14:53.825301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.825318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.825635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.825651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.825987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.826004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.826186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.826205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 00:29:28.369 [2024-11-20 09:14:53.826551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.369 [2024-11-20 09:14:53.826569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.369 qpair failed and we were unable to recover it. 
00:29:28.369 [2024-11-20 09:14:53.826897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.826916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.827245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.827262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.827604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.827623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.827936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.827953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.828272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.828294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.828626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.828643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.828983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.829001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.829377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.829395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.829711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.829728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.830133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.830150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.830463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.830481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.830806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.830823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.369 [2024-11-20 09:14:53.831139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.369 [2024-11-20 09:14:53.831156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.369 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.831500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.831517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.831855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.831874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.832081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.832099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.832428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.832446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.832780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.832797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.833125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.833141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.833460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.833478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.833797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.833814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.834143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.834166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.834532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.834550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.834865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.834882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.835199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.835217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.835547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.835566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.835904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.835923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.836234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.836251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.836565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.836582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.836921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.836938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.837270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.837287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.837505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.837522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.837851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.837870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.838206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.838224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.838568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.838586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.838786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.838808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.839136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.839154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.839493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.839513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.839845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.839862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.840177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.840194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.840505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.840523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.840866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.840885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.841231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.841249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.841587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.841605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.841935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.841952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.842283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.842301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.842646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.842662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.842993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.843012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.843326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.843343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.843683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.843702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.370 qpair failed and we were unable to recover it.
00:29:28.370 [2024-11-20 09:14:53.844041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.370 [2024-11-20 09:14:53.844058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.844389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.844406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.844738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.844755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.845090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.845110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.845445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.845463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.845782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.845798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.846134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.846150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.846467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.846484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.846821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.846838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.847181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.847199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.847536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.847554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.847932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.847950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.848285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.848302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.848641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.848657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.848979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.848997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.849406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.849423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.849756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.849775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.850108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.850124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.850459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.850478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.850801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.850818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.851123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.851140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.851470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.851493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.851826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.851845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.852178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.852196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.852538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.852556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.852867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.852884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.853206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.853224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.853556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.853576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.853911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.853928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.854312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.854331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.854682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.854699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.855036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.855055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.855388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.855406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.371 [2024-11-20 09:14:53.855720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.371 [2024-11-20 09:14:53.855738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.371 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.856053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.856070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.856442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.856461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.856754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.856770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.857119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.857136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.857352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.857368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.857720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.857737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.858079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.858096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.858450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.858468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.858810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.858825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.859175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.859190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.859415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.859429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.859785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.859800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.860137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.860151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.860523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.860538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.860873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.860887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.372 [2024-11-20 09:14:53.861210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.372 [2024-11-20 09:14:53.861225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.372 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.861463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.861481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.861831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.861845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.862180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.862195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.862530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.862547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.862887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.862904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.863180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.863198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.863542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.863559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.863907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.863924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.864230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.864250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.864600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.864621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.864962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.864980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.865299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.865322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.865673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.865690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.866029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.866047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.866390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.647 [2024-11-20 09:14:53.866409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.647 qpair failed and we were unable to recover it.
00:29:28.647 [2024-11-20 09:14:53.866626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-11-20 09:14:53.866645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-11-20 09:14:53.866982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-11-20 09:14:53.866999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-11-20 09:14:53.867336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-11-20 09:14:53.867354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-11-20 09:14:53.867684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-11-20 09:14:53.867703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-11-20 09:14:53.868038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-11-20 09:14:53.868058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 
00:29:28.647 [2024-11-20 09:14:53.868380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-11-20 09:14:53.868399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-11-20 09:14:53.868629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.868648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.868913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.868930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.869309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.869328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.869697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.869715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 
00:29:28.648 [2024-11-20 09:14:53.870047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.870067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.870384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.870403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.870737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.870755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.871077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.871095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.871404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.871423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 
00:29:28.648 [2024-11-20 09:14:53.871759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.871776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.872094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.872112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.872454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.872473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.872684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.872702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.872928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.872949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 
00:29:28.648 [2024-11-20 09:14:53.873274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.873293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.873627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.873645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.873949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.873968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.874280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.874298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.874653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.874672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 
00:29:28.648 [2024-11-20 09:14:53.875007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.875025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.875367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.875384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.875729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.875747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.876086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.876103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.876422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.876440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 
00:29:28.648 [2024-11-20 09:14:53.876774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.876792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.877046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.877063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.877396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.877415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.877685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.877701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.878032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.878050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 
00:29:28.648 [2024-11-20 09:14:53.878389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.878406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.878586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.878607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.878935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.878952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.879290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.879307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.879642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.879658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 
00:29:28.648 [2024-11-20 09:14:53.879980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.879996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.880220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.880238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.880596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.880613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-11-20 09:14:53.880832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-11-20 09:14:53.880850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.881064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.881082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [2024-11-20 09:14:53.881402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.881420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.881781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.881802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.882138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.882156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.882475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.882494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.882831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.882850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [2024-11-20 09:14:53.883199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.883218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.883571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.883588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.883909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.883927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.884142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.884174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.884528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.884547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [2024-11-20 09:14:53.884886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.884903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.885217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.885235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.885543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.885560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.885789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.885806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.886032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.886048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [2024-11-20 09:14:53.886380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.886398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.886780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.886798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.887142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.887167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.887505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.887521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.887864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.887881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [2024-11-20 09:14:53.888208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.888225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.888578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.888597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.888788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.888807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.889024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.889041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.889377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.889396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [2024-11-20 09:14:53.889740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.889757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.890101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.890118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.890321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.890339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.890669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.890686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-11-20 09:14:53.891025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-11-20 09:14:53.891042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [2024-11-20 09:14:53.891379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:28.649 [2024-11-20 09:14:53.891396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 
00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [... the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 09:14:53.891 through 09:14:53.930; all attempts failed identically ...] 
00:29:28.653 [2024-11-20 09:14:53.930271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:28.653 [2024-11-20 09:14:53.930289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 
00:29:28.653 qpair failed and we were unable to recover it. 
00:29:28.653 [2024-11-20 09:14:53.930645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.930663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.930998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.931015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.931383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.931403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.931743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.931761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.932109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.932128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 
00:29:28.653 [2024-11-20 09:14:53.932506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.932524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.932846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.932869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.933211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.933229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.933572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.933591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.933929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.933945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 
00:29:28.653 [2024-11-20 09:14:53.934273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.934290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.934638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.934655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.934994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.935012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.935230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.935248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.935583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.935600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 
00:29:28.653 [2024-11-20 09:14:53.935932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.935949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.936291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.936308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.936636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.936653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.936985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.937002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.937230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.937247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 
00:29:28.653 [2024-11-20 09:14:53.937600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.937618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.937974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.937992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.938222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.938239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.938612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.938630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.938836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.938852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 
00:29:28.653 [2024-11-20 09:14:53.939192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.939210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.939516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.939533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.939893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.939910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.940241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.940258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.940439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.940458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 
00:29:28.653 [2024-11-20 09:14:53.940770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.940788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.941109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.941126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.941470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.941490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.941824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.941842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 00:29:28.653 [2024-11-20 09:14:53.942175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.942192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.653 qpair failed and we were unable to recover it. 
00:29:28.653 [2024-11-20 09:14:53.942406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.653 [2024-11-20 09:14:53.942424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.942754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.942773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.943110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.943127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.943447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.943466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.943797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.943814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 
00:29:28.654 [2024-11-20 09:14:53.944038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.944055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.944457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.944475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.944809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.944829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.945164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.945181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.945518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.945537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 
00:29:28.654 [2024-11-20 09:14:53.945871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.945889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.946203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.946224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.946529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.946546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.946880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.946897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.947228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.947246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 
00:29:28.654 [2024-11-20 09:14:53.947599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.947615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.947945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.947964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.948294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.948312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.948649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.948667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.948982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.948998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 
00:29:28.654 [2024-11-20 09:14:53.949310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.949327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.949656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.949673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.950006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.950024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.950375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.950392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.950728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.950745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 
00:29:28.654 [2024-11-20 09:14:53.950964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.950981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.951221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.951238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.951578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.951595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.951928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.951947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.952260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.952276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 
00:29:28.654 [2024-11-20 09:14:53.952597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.952614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.952945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.952961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.953281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.953298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.953640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.953657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.953996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.954013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 
00:29:28.654 [2024-11-20 09:14:53.954250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.954267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.954621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.954640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.954975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.954993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.654 qpair failed and we were unable to recover it. 00:29:28.654 [2024-11-20 09:14:53.955196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.654 [2024-11-20 09:14:53.955215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.655 qpair failed and we were unable to recover it. 00:29:28.655 [2024-11-20 09:14:53.955532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.655 [2024-11-20 09:14:53.955548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.655 qpair failed and we were unable to recover it. 
00:29:28.655 [2024-11-20 09:14:53.955857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED)
00:29:28.655 [2024-11-20 09:14:53.955874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.655 qpair failed and we were unable to recover it.
00:29:28.658 [... the three messages above repeat identically for every connect retry from 09:14:53.956204 through 09:14:53.995134, tqpair=0x7f0628000b90, addr=10.0.0.2, port=4420; roughly 115 repetitions collapsed ...]
00:29:28.658 [2024-11-20 09:14:53.995450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.995468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:53.995821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.995837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:53.996171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.996189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:53.996525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.996544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:53.996869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.996885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 
00:29:28.658 [2024-11-20 09:14:53.997200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.997218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:53.997582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.997599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:53.997940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.997959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:53.998276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.998295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:53.998631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.998648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 
00:29:28.658 [2024-11-20 09:14:53.998987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.999003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:53.999348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.999365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:53.999703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:53.999719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.000034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.000051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.000396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.000413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 
00:29:28.658 [2024-11-20 09:14:54.000632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.000648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.000960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.000981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.001311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.001329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.001668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.001687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.002021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.002037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 
00:29:28.658 [2024-11-20 09:14:54.002260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.002278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.002613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.002631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.002834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.002852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.003137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.003155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.003469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.003486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 
00:29:28.658 [2024-11-20 09:14:54.003804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.003820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.004170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.004188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.004525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.004542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.004870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-11-20 09:14:54.004890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-11-20 09:14:54.005217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.005235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 
00:29:28.659 [2024-11-20 09:14:54.005576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.005595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.005932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.005949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.006274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.006292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.006684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.006701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.006995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.007013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 
00:29:28.659 [2024-11-20 09:14:54.007330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.007348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.007675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.007692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.008023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.008040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.008380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.008398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.008734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.008751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 
00:29:28.659 [2024-11-20 09:14:54.008978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.008995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.009330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.009349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.009696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.009715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.010072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.010090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.010458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.010477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 
00:29:28.659 [2024-11-20 09:14:54.010689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.010709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.010938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.010956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.011285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.011303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.011643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.011660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.011976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.011992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 
00:29:28.659 [2024-11-20 09:14:54.012333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.012351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.012719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.012736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.013080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.013097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.013418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.013436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.013772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.013791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 
00:29:28.659 [2024-11-20 09:14:54.014129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.014146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.014548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.014566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.014906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.014925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.015254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.015272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.015603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.015621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 
00:29:28.659 [2024-11-20 09:14:54.015952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.015970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.016301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.016318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.016656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.016673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.017025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.017042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.017381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.017398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 
00:29:28.659 [2024-11-20 09:14:54.017713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.017731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-11-20 09:14:54.017922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-11-20 09:14:54.017941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.018268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.018286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.018628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.018645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.018960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.018976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 
00:29:28.660 [2024-11-20 09:14:54.019326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.019344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.019679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.019696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.020035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.020052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.020365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.020382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.020720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.020738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 
00:29:28.660 [2024-11-20 09:14:54.021075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.021093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.021431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.021449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.021667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.021683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.022012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.022030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.022340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.022357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 
00:29:28.660 [2024-11-20 09:14:54.022734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.022752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.023091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.023109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.023322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.023341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.023675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.023696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.024028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.024045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 
00:29:28.660 [2024-11-20 09:14:54.024389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.024406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.024736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.024755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.025089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.025106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.025440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.025459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.025789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.025806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 
00:29:28.660 [2024-11-20 09:14:54.026179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.026198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.026531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.026548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.026873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.026889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.027209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.027227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.027545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.027562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 
00:29:28.660 [2024-11-20 09:14:54.027915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.027932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.028137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.028155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.028490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.028509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.028844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.028862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.029200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.029217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 
00:29:28.660 [2024-11-20 09:14:54.029425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.029442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.029773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.660 [2024-11-20 09:14:54.029792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.660 qpair failed and we were unable to recover it. 00:29:28.660 [2024-11-20 09:14:54.030125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.030142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.030542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.030560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.030907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.030926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 
00:29:28.661 [2024-11-20 09:14:54.031254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.031271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.031608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.031627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.031959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.031975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.032294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.032311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.032658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.032674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 
00:29:28.661 [2024-11-20 09:14:54.033015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.033034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.033312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.033330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.033665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.033684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.033998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.034015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.034338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.034355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 
00:29:28.661 [2024-11-20 09:14:54.034689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.034706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.035052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.035072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.035409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.035427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.035762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.035781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.036120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.036137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 
00:29:28.661 [2024-11-20 09:14:54.036453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.036471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.036804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.036821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.037169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.037187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.037532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.037554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.037891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.037909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 
00:29:28.661 [2024-11-20 09:14:54.038233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.038251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.038605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.038624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.038956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.038973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.039297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.661 [2024-11-20 09:14:54.039315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.661 qpair failed and we were unable to recover it. 00:29:28.661 [2024-11-20 09:14:54.039657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.039674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 
00:29:28.662 [2024-11-20 09:14:54.040052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.040070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.040412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.040430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.040744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.040761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.041094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.041110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.041438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.041456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 
00:29:28.662 [2024-11-20 09:14:54.041795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.041811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.042127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.042143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.042472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.042491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.042720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.042739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.043060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.043077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 
00:29:28.662 [2024-11-20 09:14:54.043402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.043421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.043759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.043778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.044110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.044128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.044479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.044499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.044828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.044846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 
00:29:28.662 [2024-11-20 09:14:54.045185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.045202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.045547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.045565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.045907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.045926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.046237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.046255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.046640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.046660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 
00:29:28.662 [2024-11-20 09:14:54.046876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.046895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.047236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.047254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.047601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.047620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.047955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.047973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.048209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.048226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 
00:29:28.662 [2024-11-20 09:14:54.048478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.048496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.048827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.048844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.049167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.049185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.049392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.049409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.049728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.049745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 
00:29:28.662 [2024-11-20 09:14:54.050085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.050103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.050417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.050434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.050755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.050773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.050988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.051009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.051340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.051358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 
00:29:28.662 [2024-11-20 09:14:54.051697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.051714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.052073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.662 [2024-11-20 09:14:54.052090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.662 qpair failed and we were unable to recover it. 00:29:28.662 [2024-11-20 09:14:54.052414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.663 [2024-11-20 09:14:54.052431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.663 qpair failed and we were unable to recover it. 00:29:28.663 [2024-11-20 09:14:54.052766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.663 [2024-11-20 09:14:54.052783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.663 qpair failed and we were unable to recover it. 00:29:28.663 [2024-11-20 09:14:54.053105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.663 [2024-11-20 09:14:54.053122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.663 qpair failed and we were unable to recover it. 
00:29:28.663 [2024-11-20 09:14:54.053454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.663 [2024-11-20 09:14:54.053472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.663 qpair failed and we were unable to recover it. 00:29:28.663 [2024-11-20 09:14:54.053811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.663 [2024-11-20 09:14:54.053830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.663 qpair failed and we were unable to recover it. 00:29:28.663 [2024-11-20 09:14:54.054167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.663 [2024-11-20 09:14:54.054185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.663 qpair failed and we were unable to recover it. 00:29:28.663 [2024-11-20 09:14:54.054509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.663 [2024-11-20 09:14:54.054526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.663 qpair failed and we were unable to recover it. 00:29:28.663 [2024-11-20 09:14:54.054708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.663 [2024-11-20 09:14:54.054727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.663 qpair failed and we were unable to recover it. 
00:29:28.666 [2024-11-20 09:14:54.090422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.090440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.090798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.090815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.091027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.091044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.091370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.091391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.091583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.091602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 
00:29:28.666 [2024-11-20 09:14:54.091955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.091973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.092306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.092326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.092537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.092556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.092892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.092911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.093235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.093254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 
00:29:28.666 [2024-11-20 09:14:54.093594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.093611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.093952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.093969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.094296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.094313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.094576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.094593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.094917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.094934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 
00:29:28.666 [2024-11-20 09:14:54.095248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.095266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.095629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.095647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.095968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.095984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.096311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.096329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.096675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.096694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 
00:29:28.666 [2024-11-20 09:14:54.096884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.096904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.097212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.097230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.097569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.097586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.097926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.097943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.666 [2024-11-20 09:14:54.098273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.098290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 
00:29:28.666 [2024-11-20 09:14:54.098640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.666 [2024-11-20 09:14:54.098657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.666 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.099007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.099027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.099351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.099368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.099593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.099610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.099947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.099964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 
00:29:28.667 [2024-11-20 09:14:54.100279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.100297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.100623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.100639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.100968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.100985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.101229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.101247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.101591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.101607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 
00:29:28.667 [2024-11-20 09:14:54.101950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.101967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.102308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.102327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.102617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.102633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.102981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.103000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.103327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.103344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 
00:29:28.667 [2024-11-20 09:14:54.103687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.103704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.103925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.103943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.104261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.104280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.104492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.104515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.104756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.104774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 
00:29:28.667 [2024-11-20 09:14:54.105092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.105111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.105445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.105462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.105798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.105816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.106149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.106174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.106502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.106519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 
00:29:28.667 [2024-11-20 09:14:54.106850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.106867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.107207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.107225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.107599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.107615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.107949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.107965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.108299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.108318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 
00:29:28.667 [2024-11-20 09:14:54.108683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.108701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.109062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.109079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.109393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.109413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.109797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.109814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.110168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.110185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 
00:29:28.667 [2024-11-20 09:14:54.110553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.110571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.110873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.667 [2024-11-20 09:14:54.110890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.667 qpair failed and we were unable to recover it. 00:29:28.667 [2024-11-20 09:14:54.111233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.111252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.111496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.111513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.111866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.111883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 
00:29:28.668 [2024-11-20 09:14:54.112227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.112245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.112491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.112507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.112737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.112755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.113079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.113097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.113467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.113486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 
00:29:28.668 [2024-11-20 09:14:54.113826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.113844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.114184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.114202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.114550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.114570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.114904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.114921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.115198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.115216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 
00:29:28.668 [2024-11-20 09:14:54.115621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.115638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.115856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.115874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.116118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.116136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.116504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.116523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.116807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.116823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 
00:29:28.668 [2024-11-20 09:14:54.117056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.117072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.117314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.117332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.117657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.117674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.117997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.118017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 00:29:28.668 [2024-11-20 09:14:54.118394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.668 [2024-11-20 09:14:54.118412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.668 qpair failed and we were unable to recover it. 
00:29:28.671 [2024-11-20 09:14:54.155210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.155229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 00:29:28.671 [2024-11-20 09:14:54.155490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.155506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 00:29:28.671 [2024-11-20 09:14:54.155836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.155853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 00:29:28.671 [2024-11-20 09:14:54.156193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.156213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 00:29:28.671 [2024-11-20 09:14:54.156593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.156610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 
00:29:28.671 [2024-11-20 09:14:54.156932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.156949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 00:29:28.671 [2024-11-20 09:14:54.157174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.157192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 00:29:28.671 [2024-11-20 09:14:54.157510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.157527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 00:29:28.671 [2024-11-20 09:14:54.157934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.157956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 00:29:28.671 [2024-11-20 09:14:54.158330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.158350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 
00:29:28.671 [2024-11-20 09:14:54.158661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.158678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 00:29:28.671 [2024-11-20 09:14:54.159028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.671 [2024-11-20 09:14:54.159046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.671 qpair failed and we were unable to recover it. 00:29:28.943 [2024-11-20 09:14:54.159378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-11-20 09:14:54.159399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-11-20 09:14:54.159713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-11-20 09:14:54.159732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-11-20 09:14:54.159948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-11-20 09:14:54.159966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 
00:29:28.943 [2024-11-20 09:14:54.160301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-11-20 09:14:54.160319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-11-20 09:14:54.160640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-11-20 09:14:54.160660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-11-20 09:14:54.161008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-11-20 09:14:54.161026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-11-20 09:14:54.161382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-11-20 09:14:54.161401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-11-20 09:14:54.161720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-11-20 09:14:54.161737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 
00:29:28.943 [2024-11-20 09:14:54.161932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.161950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.162266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.162286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.162588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.162604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.162803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.162821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.163157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.163184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 
00:29:28.944 [2024-11-20 09:14:54.163509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.163526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.163864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.163881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.164247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.164268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.164611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.164627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.164954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.164972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 
00:29:28.944 [2024-11-20 09:14:54.165381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.165398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.165749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.165768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.166096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.166113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.166480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.166499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.166821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.166838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 
00:29:28.944 [2024-11-20 09:14:54.167194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.167214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.167617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.167634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.167959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.167976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.168306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.168324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.168635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.168652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 
00:29:28.944 [2024-11-20 09:14:54.168846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.168864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.169186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.169208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.169542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.169558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.169894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.169911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.170115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.170135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 
00:29:28.944 [2024-11-20 09:14:54.170516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.170534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.170869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.170890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.171176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.171195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.171539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.171561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.171900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.171921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 
00:29:28.944 [2024-11-20 09:14:54.172266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.172286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.172620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.172639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.172977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.172994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.173340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.173358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.173690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.173708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 
00:29:28.944 [2024-11-20 09:14:54.174056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.174075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-11-20 09:14:54.174405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-11-20 09:14:54.174424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.174622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.174639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.174981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.175000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.175337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.175355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 
00:29:28.945 [2024-11-20 09:14:54.175705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.175725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.176048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.176065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.176395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.176413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.177200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.177233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.177572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.177591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 
00:29:28.945 [2024-11-20 09:14:54.177917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.177935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.178284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.178308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.178641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.178659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.178996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.179016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.179333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.179353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 
00:29:28.945 [2024-11-20 09:14:54.179669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.179691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.180029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.180047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.180388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.180408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.180732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.180752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.181062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.181081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 
00:29:28.945 [2024-11-20 09:14:54.181478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.181496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.181720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.181738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.182128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.182147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.182482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.182503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-11-20 09:14:54.182842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-11-20 09:14:54.182861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 
00:29:28.948 [2024-11-20 09:14:54.222480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-11-20 09:14:54.222500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-11-20 09:14:54.222841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-11-20 09:14:54.222858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-11-20 09:14:54.223194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-11-20 09:14:54.223214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-11-20 09:14:54.223525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-11-20 09:14:54.223543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-11-20 09:14:54.223862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-11-20 09:14:54.223880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 
00:29:28.948 [2024-11-20 09:14:54.224205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-11-20 09:14:54.224227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-11-20 09:14:54.224613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-11-20 09:14:54.224631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-11-20 09:14:54.224944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-11-20 09:14:54.224964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-11-20 09:14:54.225289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-11-20 09:14:54.225307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.225652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.225671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-11-20 09:14:54.226007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.226025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.226228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.226247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.226570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.226588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.226806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.226826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.227134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.227151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-11-20 09:14:54.227497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.227517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.227833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.227850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.228197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.228217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.228567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.228583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.228927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.228947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-11-20 09:14:54.229273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.229291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.229633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.229652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.229985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.230002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.230325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.230343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.230685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.230702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-11-20 09:14:54.231021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.231040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.231355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.231372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.231690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.231709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.232044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.232062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.232398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.232416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-11-20 09:14:54.232736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.232753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.233079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.233098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.233440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.233458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.233798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.233816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.234164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.234184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-11-20 09:14:54.234494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.234511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.234846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.234865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.235198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.235215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.235550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.235567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.235888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.235906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-11-20 09:14:54.236257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.236277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.236610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.236627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.236806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.236824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.237174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.237192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.237530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.237548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-11-20 09:14:54.237882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.237903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-11-20 09:14:54.238233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-11-20 09:14:54.238253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.238588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.238606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.238951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.238969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.239301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.239319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-11-20 09:14:54.239691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.239708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.240019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.240035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.240277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.240295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.240633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.240653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.240984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.241003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-11-20 09:14:54.241402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.241422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.241756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.241776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.242123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.242140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.242465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.242485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.242811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.242829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-11-20 09:14:54.243171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.243191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.243533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.243551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.243880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.243898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.244233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.244250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.244551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.244568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-11-20 09:14:54.244900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.244916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.245257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.245277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.245611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.245628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.245943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.245962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.246297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.246315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-11-20 09:14:54.246663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.246683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.246901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.246919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.250189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.250241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-11-20 09:14:54.250625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-11-20 09:14:54.250646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-11-20 09:14:54.251024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-11-20 09:14:54.251047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 
00:29:28.951 [2024-11-20 09:14:54.251426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.951 [2024-11-20 09:14:54.251450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.951 qpair failed and we were unable to recover it.
00:29:28.954 [2024-11-20 09:14:54.296732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.296763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.297098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.297131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.297564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.297597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.297939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.297971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.298322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.298357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 
00:29:28.954 [2024-11-20 09:14:54.298708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.298739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.299112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.299144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.299498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.299531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.299885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.299916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.300295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.300329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 
00:29:28.954 [2024-11-20 09:14:54.300697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.300727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.301056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.301089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.301351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.301383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.301627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.301657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.302014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.302045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 
00:29:28.954 [2024-11-20 09:14:54.302408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.302450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.302848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-11-20 09:14:54.302879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-11-20 09:14:54.303240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.303272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.303626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.303656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.304009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.304040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 
00:29:28.955 [2024-11-20 09:14:54.304415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.304447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.304773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.304804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.305150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.305192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.305566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.305597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.305939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.305972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 
00:29:28.955 [2024-11-20 09:14:54.306298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.306330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.306679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.306711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.307061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.307094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.307446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.307478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.307810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.307842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 
00:29:28.955 [2024-11-20 09:14:54.308189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.308221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.308547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.308577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.308931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.308961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.309282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.309317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.309683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.309713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 
00:29:28.955 [2024-11-20 09:14:54.309939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.309969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.310232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.310267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.310634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.310665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.310987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.311018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.311383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.311416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 
00:29:28.955 [2024-11-20 09:14:54.311763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.311795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.312118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.312148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.312543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.312574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.312929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.312960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.313316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.313348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 
00:29:28.955 [2024-11-20 09:14:54.313645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.313675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.313955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.313986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.314341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.314373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.314634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.314668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.315012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.315043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 
00:29:28.955 [2024-11-20 09:14:54.315410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.315445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.315788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.315820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.316183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-11-20 09:14:54.316217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-11-20 09:14:54.316596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.316626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.316967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.316998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 
00:29:28.956 [2024-11-20 09:14:54.317343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.317382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.317710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.317740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.318090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.318122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.318400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.318433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.318775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.318807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 
00:29:28.956 [2024-11-20 09:14:54.319156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.319201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.319465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.319495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.319658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.319689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.320054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.320085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.320445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.320480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 
00:29:28.956 [2024-11-20 09:14:54.320887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.320918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.321282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.321314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.321676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.321706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.321947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.321977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.322240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.322271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 
00:29:28.956 [2024-11-20 09:14:54.322650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.322681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.323034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.323066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.323297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.323329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.323675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.323707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-11-20 09:14:54.324062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-11-20 09:14:54.324092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 
00:29:28.956 [2024-11-20 09:14:54.324451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.956 [2024-11-20 09:14:54.324484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:28.956 qpair failed and we were unable to recover it.
00:29:28.958 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" records for tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 repeat continuously from 09:14:54.324451 through 09:14:54.368316 ...]
00:29:28.960 [2024-11-20 09:14:54.368688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.368719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.369120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.369151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.369423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.369455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.369820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.369851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.370204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.370236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 
00:29:28.960 [2024-11-20 09:14:54.370645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.370676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.371041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.371072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.371420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.371451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.371811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.371842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.372207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.372238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 
00:29:28.960 [2024-11-20 09:14:54.372506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.372537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.372974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.373006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.373388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.373422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.373787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.373817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.374198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.374232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 
00:29:28.960 [2024-11-20 09:14:54.374593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.374623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.374971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.375004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.375384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.375417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.375781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.375812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.376182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.376213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 
00:29:28.960 [2024-11-20 09:14:54.376469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.376499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.376876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.376907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.377291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.377323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.377701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.377735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.378101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.378132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 
00:29:28.960 [2024-11-20 09:14:54.378515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.378553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.378906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.960 [2024-11-20 09:14:54.378937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.960 qpair failed and we were unable to recover it. 00:29:28.960 [2024-11-20 09:14:54.379323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.379356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.379758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.379792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.380138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.380182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 
00:29:28.961 [2024-11-20 09:14:54.380540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.380570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.380928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.380960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.381322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.381354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.381712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.381745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.382005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.382036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 
00:29:28.961 [2024-11-20 09:14:54.382386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.382419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.382768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.382797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.383182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.383217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.383578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.383609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.383977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.384009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 
00:29:28.961 [2024-11-20 09:14:54.384337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.384371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.384736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.384768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.385023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.385052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.385465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.385499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.385867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.385897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 
00:29:28.961 [2024-11-20 09:14:54.386235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.386266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.386639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.386671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.387115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.387144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.387532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.387564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.387921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.387951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 
00:29:28.961 [2024-11-20 09:14:54.388342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.388375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.388618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.388652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.389006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.389045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.389419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.389453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.389815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.389846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 
00:29:28.961 [2024-11-20 09:14:54.390220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.390253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.390626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.390656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.391023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.391055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.391425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.391457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.391747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.391778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 
00:29:28.961 [2024-11-20 09:14:54.392125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.392156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.392644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.392675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.393027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.393061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.961 qpair failed and we were unable to recover it. 00:29:28.961 [2024-11-20 09:14:54.393322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.961 [2024-11-20 09:14:54.393355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.393709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.393741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 
00:29:28.962 [2024-11-20 09:14:54.394105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.394136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.394541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.394574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.394943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.394975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.395358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.395390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.395753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.395786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 
00:29:28.962 [2024-11-20 09:14:54.396151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.396194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.396552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.396583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.396821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.396851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.397219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.397251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.397617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.397647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 
00:29:28.962 [2024-11-20 09:14:54.398007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.398038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.398418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.398452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.398736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.398766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.399132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.399172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 00:29:28.962 [2024-11-20 09:14:54.399510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.962 [2024-11-20 09:14:54.399543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.962 qpair failed and we were unable to recover it. 
00:29:28.965 [2024-11-20 09:14:54.442000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.965 [2024-11-20 09:14:54.442032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.965 qpair failed and we were unable to recover it. 00:29:28.965 [2024-11-20 09:14:54.442418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.965 [2024-11-20 09:14:54.442450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.965 qpair failed and we were unable to recover it. 00:29:28.965 [2024-11-20 09:14:54.442815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.965 [2024-11-20 09:14:54.442848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.965 qpair failed and we were unable to recover it. 00:29:28.965 [2024-11-20 09:14:54.443205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.965 [2024-11-20 09:14:54.443236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.965 qpair failed and we were unable to recover it. 00:29:28.965 [2024-11-20 09:14:54.443596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.965 [2024-11-20 09:14:54.443626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.965 qpair failed and we were unable to recover it. 
00:29:28.965 [2024-11-20 09:14:54.444059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.965 [2024-11-20 09:14:54.444090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.965 qpair failed and we were unable to recover it. 00:29:28.965 [2024-11-20 09:14:54.444445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.965 [2024-11-20 09:14:54.444480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.965 qpair failed and we were unable to recover it. 00:29:28.965 [2024-11-20 09:14:54.444841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.965 [2024-11-20 09:14:54.444872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.965 qpair failed and we were unable to recover it. 00:29:28.965 [2024-11-20 09:14:54.445231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.965 [2024-11-20 09:14:54.445264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.965 qpair failed and we were unable to recover it. 00:29:28.965 [2024-11-20 09:14:54.445622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.965 [2024-11-20 09:14:54.445654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.965 qpair failed and we were unable to recover it. 
00:29:28.965 [2024-11-20 09:14:54.446019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.446050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.446389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.446420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.446776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.446807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.447180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.447212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.447445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.447475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 
00:29:28.966 [2024-11-20 09:14:54.447713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.447748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.448100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.448131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.448528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.448560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.448931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.448961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.449356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.449389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 
00:29:28.966 [2024-11-20 09:14:54.449766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.449798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.450172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.450206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.450555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.450592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.450952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.450983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.451343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.451376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 
00:29:28.966 [2024-11-20 09:14:54.451727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.451758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.452116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.452146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.452496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.452529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.452905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.452936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.453297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.453329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 
00:29:28.966 [2024-11-20 09:14:54.453704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.453736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.454091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.454122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.454485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.454518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.454873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.454907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.455275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.455306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 
00:29:28.966 [2024-11-20 09:14:54.455669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.455701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.456060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.456091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.456455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.456488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.456844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.456875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.457209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.457242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 
00:29:28.966 [2024-11-20 09:14:54.457645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.457676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:28.966 [2024-11-20 09:14:54.458037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.966 [2024-11-20 09:14:54.458069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:28.966 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.458431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.458466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.458820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.458854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.459214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.459247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 
00:29:29.233 [2024-11-20 09:14:54.459600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.459631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.459992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.460022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.460395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.460428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.460778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.460808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.461191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.461224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 
00:29:29.233 [2024-11-20 09:14:54.461577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.461606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.462010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.462040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.462396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.462430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.462784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.462817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.463204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.463237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 
00:29:29.233 [2024-11-20 09:14:54.463596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.463629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.463980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.464010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.464354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.464387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.464742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.464773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.465142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.465185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 
00:29:29.233 [2024-11-20 09:14:54.465623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.465654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.466019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.466051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.466410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.466448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.466806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.466838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-11-20 09:14:54.467142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-11-20 09:14:54.467185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 
00:29:29.233 [2024-11-20 09:14:54.467546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-11-20 09:14:54.467576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-11-20 09:14:54.467823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-11-20 09:14:54.467857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-11-20 09:14:54.468217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-11-20 09:14:54.468251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-11-20 09:14:54.468628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-11-20 09:14:54.468660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-11-20 09:14:54.469015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-11-20 09:14:54.469047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 
00:29:29.234 [2024-11-20 09:14:54.469413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-11-20 09:14:54.469444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-11-20 09:14:54.469810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-11-20 09:14:54.469841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-11-20 09:14:54.470205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-11-20 09:14:54.470237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-11-20 09:14:54.470596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-11-20 09:14:54.470626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-11-20 09:14:54.470981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-11-20 09:14:54.471012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 
00:29:29.234 [2024-11-20 09:14:54.471266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.471300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.471671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.471702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.472063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.472094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.472459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.472491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.472854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.472884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.473244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.473276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.473658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.473688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.474042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.474071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.474413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.474447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.474805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.474836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.475084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.475113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.475527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.475558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.475893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.475923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.476231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.476262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.476634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.476664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.476914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.476948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.477297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.477329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.477696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.477727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.478083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-11-20 09:14:54.478115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-11-20 09:14:54.478525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.478556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.478900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.478931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.479298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.479331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.479723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.479754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.480125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.480156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.480568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.480599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.480958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.480991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.481239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.481273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.481644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.481684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.482038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.482069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.482308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.482338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.482703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.482734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.483099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.483130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.483492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.483524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.483899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.483932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.484289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.484321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.484770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.484800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.485170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.485203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.485553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.485584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.485824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.485858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.486220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.486253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.486488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.486519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.486898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.486928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.487292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.487326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.487725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.487755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.488119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.488150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.488513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.488545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.488907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.488938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.489274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.489306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.489667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.489699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.490049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-11-20 09:14:54.490080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-11-20 09:14:54.490421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.490451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.490805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.490836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.491250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.491281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.491680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.491712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.492084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.492117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.492517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.492550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.492915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.492947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.493306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.493340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.493695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.493725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.494087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.494118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.494365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.494398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.494765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.494796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.495178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.495210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.495556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.495587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.495926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.495958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.496312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.496345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.496700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.496732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.497089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.497127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.497518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.497551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.497904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.497934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.498281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.498312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.498698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.498728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.499088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.499121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.499514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.499548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.499898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.499930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.500330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.500361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.500560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.500589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.500935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.500967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.501324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-11-20 09:14:54.501356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-11-20 09:14:54.501711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.501743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.502005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.502041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.502422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.502456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.502814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.502846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.503206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.503238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.503634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.503665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.503825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.503854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.504100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.504136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.504529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.504562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.504913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.504945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.505384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.505416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.505775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.505807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.506154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.506203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.506587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.506618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.506875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.506904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.507265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.507299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.507665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.507696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.508054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.508086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.508440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.508472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.508703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.508733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.509102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.509134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.509529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.509561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.509925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.509956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.510312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.510345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.510713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.510744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.511115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.511145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.511540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.511572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.511813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.511842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.512202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.512240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.512596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.512628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.512979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.513010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-11-20 09:14:54.513384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-11-20 09:14:54.513417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.238 [2024-11-20 09:14:54.513763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-11-20 09:14:54.513796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-11-20 09:14:54.514148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-11-20 09:14:54.514188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-11-20 09:14:54.514554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-11-20 09:14:54.514585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-11-20 09:14:54.514958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-11-20 09:14:54.514989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-11-20 09:14:54.515263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.515295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.515673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.515704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.516062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.516093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.516460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.516494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.516841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.516871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 
00:29:29.238 [2024-11-20 09:14:54.517229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.517260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.517629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.517661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.518031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.518062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.518439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.518470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.518816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.518848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 
00:29:29.238 [2024-11-20 09:14:54.519203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.519234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.519614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.519643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.519998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.520029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.520276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.520307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.520643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.520674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 
00:29:29.238 [2024-11-20 09:14:54.521032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.521063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.521424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.521456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.521814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.521845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.522085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.522119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-11-20 09:14:54.522506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.522538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 
00:29:29.238 [2024-11-20 09:14:54.522892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-11-20 09:14:54.522923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.523285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.523320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.523669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.523699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.524060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.524091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.524446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.524478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 
00:29:29.239 [2024-11-20 09:14:54.524850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.524881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.525241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.525273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.525641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.525671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.525910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.525942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.526309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.526340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 
00:29:29.239 [2024-11-20 09:14:54.526585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.526620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.526971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.527004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.527340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.527379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.527735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.527767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.528120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.528152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 
00:29:29.239 [2024-11-20 09:14:54.528523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.528556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.528785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.528814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.529079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.529113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.529474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.529509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.529866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.529896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 
00:29:29.239 [2024-11-20 09:14:54.530241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.530275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.530684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.530715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.531064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.531095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.531457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.531491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.531846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.531876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 
00:29:29.239 [2024-11-20 09:14:54.532244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.532276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.532679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.532711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.533061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.533092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.533317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.533350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.533790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.533822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 
00:29:29.239 [2024-11-20 09:14:54.534179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.534211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.534572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.534602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.534964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.534994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-11-20 09:14:54.535358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-11-20 09:14:54.535391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.535734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.535765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-11-20 09:14:54.536110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.536141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.536419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.536450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.536805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.536836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.537189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.537221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.537614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.537648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-11-20 09:14:54.538005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.538036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.538396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.538429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.538785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.538817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.539183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.539215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.539515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.539545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-11-20 09:14:54.539898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.539930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.540301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.540333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.540699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.540732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.540991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.541023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.541395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.541428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-11-20 09:14:54.541785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.541817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.542182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.542214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.542576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.542616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.542974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.543004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.543346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.543380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-11-20 09:14:54.543725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.543756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.544115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.544146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.544539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.544570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.544799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.544829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.545107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.545138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-11-20 09:14:54.545524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.545558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.545911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.545943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.546298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.546331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-11-20 09:14:54.546569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-11-20 09:14:54.546603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.546958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.546989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-11-20 09:14:54.547359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.547392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.547747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.547778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.548140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.548183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.548534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.548565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.548924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.548953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-11-20 09:14:54.549315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.549349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.549711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.549741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.550095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.550127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.550520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.550552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.550921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.550953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-11-20 09:14:54.551311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.551343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.551691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.551722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.552088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.552118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.552486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.552517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.552854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.552886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-11-20 09:14:54.553231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.553283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.553679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.553711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.554072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.554103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.554331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.554363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.554605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.554638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-11-20 09:14:54.555031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.555060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.555298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.555333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.555736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.555767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.556114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.556147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.556458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.556488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-11-20 09:14:54.556841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.556872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.557236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.557269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.557650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.557687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-11-20 09:14:54.557929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-11-20 09:14:54.557959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.558314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.558347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-11-20 09:14:54.558705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.558737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.559099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.559130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.559527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.559559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.559784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.559818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.560072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.560103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-11-20 09:14:54.560495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.560526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.560931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.560963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.561320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.561352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.561716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.561747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.562098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.562130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-11-20 09:14:54.562504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.562536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.562897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.562929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.563305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.563339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.563693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.563723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.564083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.564114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-11-20 09:14:54.564363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.564398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.564775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.564806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.565176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.565209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.565567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.565600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.565957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.565987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-11-20 09:14:54.566351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.566384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.566737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.566768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.567004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.567035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.567396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.567428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.567790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.567822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-11-20 09:14:54.568181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.568215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.568494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.568524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.568873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.568905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-11-20 09:14:54.569192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-11-20 09:14:54.569224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.569592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.569624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 
00:29:29.243 [2024-11-20 09:14:54.569974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.570005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.570359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.570391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.570746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.570775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.571147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.571192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.571545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.571575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 
00:29:29.243 [2024-11-20 09:14:54.571809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.571840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.572200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.572232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.572633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.572671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.573024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.573054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.573427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.573458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 
00:29:29.243 [2024-11-20 09:14:54.573857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.573888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.574132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.574178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.574564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.574594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.574944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.574976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.575323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.575354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 
00:29:29.243 [2024-11-20 09:14:54.575715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.575746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.576099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.576131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.576484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.576515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.576879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.576910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.577283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.577315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 
00:29:29.243 [2024-11-20 09:14:54.577668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.577700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.578088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.578119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.578489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.578521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.578872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.578906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.579170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.579203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 
00:29:29.243 [2024-11-20 09:14:54.579589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.579620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.579976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.580009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.580300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.580332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.580686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.580718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.581083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.581115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 
00:29:29.243 [2024-11-20 09:14:54.581352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.581388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.581754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.581785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-11-20 09:14:54.582147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-11-20 09:14:54.582192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.244 [2024-11-20 09:14:54.582566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-11-20 09:14:54.582597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-11-20 09:14:54.582960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-11-20 09:14:54.582998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 
00:29:29.244 [2024-11-20 09:14:54.583347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.583381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.583751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.583781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.584187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.584220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.584568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.584598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.584956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.584988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.585355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.585387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.585815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.585845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.586199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.586234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.586622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.586653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.587012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.587044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.587289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.587325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.587688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.587721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.588077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.588106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.588512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.588544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.588903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.588934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.589294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.589327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.589724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.589755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.590113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.590145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.590534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.590566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.590928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.590961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.591213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.591245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.591647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.591678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.244 qpair failed and we were unable to recover it.
00:29:29.244 [2024-11-20 09:14:54.592068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.244 [2024-11-20 09:14:54.592100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.592332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.592363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.592731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.592763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.593099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.593130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.593480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.593511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.593739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.593770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.594139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.594184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.594543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.594574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.594926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.594957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.595315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.595347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.595714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.595744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.595989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.596022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.596388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.596421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 882623 Killed "${NVMF_APP[@]}" "$@"
00:29:29.245 [2024-11-20 09:14:54.596796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.596829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.597197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.597232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:29.245 [2024-11-20 09:14:54.597479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.597515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:29.245 [2024-11-20 09:14:54.597885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.597918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:29.245 [2024-11-20 09:14:54.598278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.598309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:29.245 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:29.245 [2024-11-20 09:14:54.598683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.598714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.599077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.599110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.599516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.599549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.599806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.599837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.600209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.600241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.600503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.600532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.600891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.600922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.601197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.601230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.601606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.601635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.245 qpair failed and we were unable to recover it.
00:29:29.245 [2024-11-20 09:14:54.601882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.245 [2024-11-20 09:14:54.601911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.602071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.602104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.602385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.602417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.602763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.602795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.603055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.603086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.603467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.603499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.603886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.603917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.604282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.604316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.604686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.604718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.605109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.605140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.605571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.605603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.606006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.606037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.606264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.606295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.606678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.606709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=883543
00:29:29.246 [2024-11-20 09:14:54.606969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.607000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 883543
00:29:29.246 [2024-11-20 09:14:54.607343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.607374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 883543 ']' 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:29.246 [2024-11-20 09:14:54.607764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.607797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:29.246 [2024-11-20 09:14:54.608060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.608094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:29.246 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:29.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:29.246 [2024-11-20 09:14:54.608457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.608491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:29.246 [2024-11-20 09:14:54.608844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 09:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:29.246 [2024-11-20 09:14:54.608875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.609231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.609264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.609620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.609652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.610020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.610051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.610298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.610332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.610688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.246 [2024-11-20 09:14:54.610721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.246 qpair failed and we were unable to recover it.
00:29:29.246 [2024-11-20 09:14:54.611121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.611153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.611460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.611492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.611901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.611934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.612288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.612324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.612736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.612768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.613119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.613152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.613550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.613586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.614024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.614056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.614301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.614335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.614607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.614641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.614991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.615026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.615278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.615317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.615685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.615716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.616077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.616108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.616508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-11-20 09:14:54.616542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-11-20 09:14:54.616908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.616940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.617305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.617339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.617697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.617729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.618132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.618174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.618527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.618559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 
00:29:29.247 [2024-11-20 09:14:54.618915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.618947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.619367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.619400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.619763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.619795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.620149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.620194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.620552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.620584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 
00:29:29.247 [2024-11-20 09:14:54.620969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.621003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.621274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.621307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.621571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.621601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.621967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.621997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.622437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.622470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 
00:29:29.247 [2024-11-20 09:14:54.622809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.622839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.623123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.623155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-11-20 09:14:54.623541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-11-20 09:14:54.623572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.623800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.623831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.624011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.624040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 
00:29:29.248 [2024-11-20 09:14:54.624308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.624342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.624622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.624654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.625004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.625035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.625412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.625445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.625808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.625840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 
00:29:29.248 [2024-11-20 09:14:54.626096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.626128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.626592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.626625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.626978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.627011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.627450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.627481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.627829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.627861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 
00:29:29.248 [2024-11-20 09:14:54.628215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.628248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.628646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.628679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.629051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.629081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.629449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.629487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.629889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.629921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 
00:29:29.248 [2024-11-20 09:14:54.630291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.630327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.630555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.630595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.630845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.630874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.631286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.631319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.631684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.631717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 
00:29:29.248 [2024-11-20 09:14:54.632076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.632108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.632504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.632538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.632879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.632911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.633155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.633200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.633446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.633476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 
00:29:29.248 [2024-11-20 09:14:54.633789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.633818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.634185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.634217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.634566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.634597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.248 [2024-11-20 09:14:54.634961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.248 [2024-11-20 09:14:54.634991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.248 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.635373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.635407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 
00:29:29.249 [2024-11-20 09:14:54.635654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.635685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.635944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.635974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.636183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.636214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.636601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.636631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.636992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.637023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 
00:29:29.249 [2024-11-20 09:14:54.637390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.637423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.637782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.637813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.638045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.638077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.638465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.638498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.638896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.638928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 
00:29:29.249 [2024-11-20 09:14:54.639289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.639323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.639684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.639714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.639870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.639900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.640145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.640189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.640571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.640602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 
00:29:29.249 [2024-11-20 09:14:54.640826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.640856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.641234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.641266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.641639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.641672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.641928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.641960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.642420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.642453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 
00:29:29.249 [2024-11-20 09:14:54.642807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.642838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.643204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.643237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.643663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.643693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.643925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.643955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.644351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.644385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 
00:29:29.249 [2024-11-20 09:14:54.644739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.644770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.645140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.645205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.645584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.645614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.645988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.646019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.646404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.646436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 
00:29:29.249 [2024-11-20 09:14:54.646814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.646846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.647104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-11-20 09:14:54.647135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-11-20 09:14:54.647414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.647447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-11-20 09:14:54.647813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.647846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-11-20 09:14:54.648111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.648142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 
00:29:29.250 [2024-11-20 09:14:54.648536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.648567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-11-20 09:14:54.648927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.648960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-11-20 09:14:54.649327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.649360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-11-20 09:14:54.649593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.649623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-11-20 09:14:54.650001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.650031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 
00:29:29.250 [2024-11-20 09:14:54.650280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.650312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-11-20 09:14:54.650699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.650730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-11-20 09:14:54.651097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.651129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-11-20 09:14:54.651563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.651597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-11-20 09:14:54.651941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-11-20 09:14:54.651973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 
00:29:29.250 [2024-11-20 09:14:54.652334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.652366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.652722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.652752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.653127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.653184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.653559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.653590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.653841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.653873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.654122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.654153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.654434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.654467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.654823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.654854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.655200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.655233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.655606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.655636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.656015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.656046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.656396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.656427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.656786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.656816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.656927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.656956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.657244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.657276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.657641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.657671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.658037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.658069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.658431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-11-20 09:14:54.658463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-11-20 09:14:54.658855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.658886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.659241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.659273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.659536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.659567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.659935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.659972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.660325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.660356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.660738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.660771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.661133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.661176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.661541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.661574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.661966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.661998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.662240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.662272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.662506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.662541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.662898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.662930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.663297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.663330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.663574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.663605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.663872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.663902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.664237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.664270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.664653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.664683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.665043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.665074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.665291] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization...
00:29:29.251 [2024-11-20 09:14:54.665356] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:29.251 [2024-11-20 09:14:54.665425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.665457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.665831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.665861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.666235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.666266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.666698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.666730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.667078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.667108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.667518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.667551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.667909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.667941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.668179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.668211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.668581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.668611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.668987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.669018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.669393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.669427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.669806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.669839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-11-20 09:14:54.670090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-11-20 09:14:54.670121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.670525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.670559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.670924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.670956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.671307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.671340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.671711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.671742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.672112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.672145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.672575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.672609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.672986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.673017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.673312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.673345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.673584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.673617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.673997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.674030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.674405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.674439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.674806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.674839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.675218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.675251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.675624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.675657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.676013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.676044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.676455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.676487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.676842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.676875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.677239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.677273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.677657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.677689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.678064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.678097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.678345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.678382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.678777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.678809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.679171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.679205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.679589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.679621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.680011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.680051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.680409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.680444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.680707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.680739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.681116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.681148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.681580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.681614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.681981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.682013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.682386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.682420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.682800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-11-20 09:14:54.682830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-11-20 09:14:54.683186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.683220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.683583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.683614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.683958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.683990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.684240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.684272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.684665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.684696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.685062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.685093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.685493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.685528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.685892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.685924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.686291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.686323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.686573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.686607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.686954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.686985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.687426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.687458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.687815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.253 [2024-11-20 09:14:54.687845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.253 qpair failed and we were unable to recover it.
00:29:29.253 [2024-11-20 09:14:54.688183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.688215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.688486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.688516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.688772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.688803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.689171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.689204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.689579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.689611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 
00:29:29.253 [2024-11-20 09:14:54.689962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.689992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.690262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.690295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.690656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.690688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.691041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.691073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.691301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.691332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 
00:29:29.253 [2024-11-20 09:14:54.691711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.691743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.692103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.692134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.692403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.692439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.692795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.692827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.693088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.693120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 
00:29:29.253 [2024-11-20 09:14:54.693377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.693408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.693761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.693792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.694183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.694216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-11-20 09:14:54.694443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-11-20 09:14:54.694473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.694873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.694910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-11-20 09:14:54.695255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.695288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.695658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.695687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.696049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.696079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.696454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.696486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.696863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.696894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-11-20 09:14:54.697148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.697189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.697573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.697604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.697958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.697991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.698417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.698448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.698812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.698843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-11-20 09:14:54.699192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.699225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.699617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.699648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.700057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.700089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.700466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.700500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.700853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.700883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-11-20 09:14:54.701239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.701271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.701647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.701681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.702048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.702078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.702415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.702446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.702807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.702838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-11-20 09:14:54.703200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.703234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.703627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.703657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.704026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.704057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.704420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.704454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.704696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.704726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-11-20 09:14:54.705089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.705120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.705498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.705533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-11-20 09:14:54.705879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-11-20 09:14:54.705910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.706282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.706315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.706672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.706705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-11-20 09:14:54.707045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.707075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.707469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.707503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.707852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.707884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.708181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.708213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.708583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.708613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-11-20 09:14:54.708978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.709008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.709361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.709395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.709619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.709650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.709892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.709926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.710288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.710328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-11-20 09:14:54.710724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.710755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.711006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.711036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.711411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.711443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.711847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.711879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.712242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.712275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-11-20 09:14:54.712516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.712547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.712897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.712925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.713188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.713221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.713572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.713602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.713891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.713922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-11-20 09:14:54.714145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.714195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-11-20 09:14:54.714555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-11-20 09:14:54.714586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.256 [2024-11-20 09:14:54.715021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.715052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-11-20 09:14:54.715424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.715460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-11-20 09:14:54.715829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.715860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 
00:29:29.256 [2024-11-20 09:14:54.716218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.716251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-11-20 09:14:54.716655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.716688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-11-20 09:14:54.716934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.716963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-11-20 09:14:54.717314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.717347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-11-20 09:14:54.717705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.717736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 
00:29:29.256 [2024-11-20 09:14:54.718083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.718114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-11-20 09:14:54.718473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.718506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-11-20 09:14:54.718861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.718894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-11-20 09:14:54.719235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.719268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-11-20 09:14:54.719642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.719674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 
00:29:29.256 [2024-11-20 09:14:54.720023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-11-20 09:14:54.720054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 
00:29:29.532 [2024-11-20 09:14:54.765514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.532 [2024-11-20 09:14:54.765547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.532 qpair failed and we were unable to recover it. 00:29:29.532 [2024-11-20 09:14:54.765897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.532 [2024-11-20 09:14:54.765929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.532 qpair failed and we were unable to recover it. 00:29:29.532 [2024-11-20 09:14:54.766326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.532 [2024-11-20 09:14:54.766360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.532 qpair failed and we were unable to recover it. 00:29:29.532 [2024-11-20 09:14:54.766636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.532 [2024-11-20 09:14:54.766666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.532 qpair failed and we were unable to recover it. 00:29:29.532 [2024-11-20 09:14:54.767024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.532 [2024-11-20 09:14:54.767057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.532 qpair failed and we were unable to recover it. 
00:29:29.532 [2024-11-20 09:14:54.767414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.532 [2024-11-20 09:14:54.767447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.532 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.767820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.767858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.768076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.768107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.768485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.768517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.768873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.768907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 
00:29:29.533 [2024-11-20 09:14:54.769264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.769297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.769667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.769698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.770056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.770088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.770474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.770508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.770726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.770756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 
00:29:29.533 [2024-11-20 09:14:54.770904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:29.533 [2024-11-20 09:14:54.771126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.771168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.771500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.771533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.771781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.771816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.772049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.772084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.772320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.772359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 
00:29:29.533 [2024-11-20 09:14:54.772712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.772745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.773114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.773145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.773596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.773631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.773997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.774029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.774258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.774292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 
00:29:29.533 [2024-11-20 09:14:54.774620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.774653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.775055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.775086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.775462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.775493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.775860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.775892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.776236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.776268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 
00:29:29.533 [2024-11-20 09:14:54.776629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.776660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.777016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.777048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.777324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.777356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.777743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.777773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.778156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.778224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 
00:29:29.533 [2024-11-20 09:14:54.778471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.778503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.778749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.778780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.779205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.779239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.779622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.779653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.780071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.780101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 
00:29:29.533 [2024-11-20 09:14:54.780467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.780500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.780904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.780937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.781298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.533 [2024-11-20 09:14:54.781331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.533 qpair failed and we were unable to recover it. 00:29:29.533 [2024-11-20 09:14:54.781563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.781593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.782090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.782206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 
00:29:29.534 [2024-11-20 09:14:54.782637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.782673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.782950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.782987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.783482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.783591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.784067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.784107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.784379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.784413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 
00:29:29.534 [2024-11-20 09:14:54.784772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.784803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.785166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.785200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.785467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.785497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.785855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.785885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.786284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.786317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 
00:29:29.534 [2024-11-20 09:14:54.786674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.786704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.787103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.787133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.787417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.787448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.787882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.787913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.788317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.788363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 
00:29:29.534 [2024-11-20 09:14:54.788729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.788763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.789125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.789155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.789573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.789606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.789962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.789995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.790374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.790406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 
00:29:29.534 [2024-11-20 09:14:54.790765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.790796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.791156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.791196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.791454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.791485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.791870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.791900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.792243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.792276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 
00:29:29.534 [2024-11-20 09:14:54.792682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.792713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.793072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.793103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.793476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.793509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.793869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.793901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.794288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.794322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 
00:29:29.534 [2024-11-20 09:14:54.794681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.794714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.795079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.795109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.795476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.795508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.795877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.795909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.534 [2024-11-20 09:14:54.796274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.796306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 
00:29:29.534 [2024-11-20 09:14:54.796670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.534 [2024-11-20 09:14:54.796702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.534 qpair failed and we were unable to recover it. 00:29:29.535 [2024-11-20 09:14:54.797053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.535 [2024-11-20 09:14:54.797085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.535 qpair failed and we were unable to recover it. 00:29:29.535 [2024-11-20 09:14:54.797442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.535 [2024-11-20 09:14:54.797474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.535 qpair failed and we were unable to recover it. 00:29:29.535 [2024-11-20 09:14:54.797844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.535 [2024-11-20 09:14:54.797876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.535 qpair failed and we were unable to recover it. 00:29:29.535 [2024-11-20 09:14:54.798086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.535 [2024-11-20 09:14:54.798117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.535 qpair failed and we were unable to recover it. 
00:29:29.535 [2024-11-20 09:14:54.798423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.798464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.798876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.798909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.799284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.799316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.799610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.799641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.799863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.799894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.800245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.800276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.800653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.800684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.801050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.801081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.801315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.801350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.801726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.801757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.802124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.802156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.802539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.802569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.802945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.802976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.803389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.803422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.803807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.803848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.804201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.804235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.804468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.804502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.804726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.804755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.805110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.805141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.805560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.805592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.805946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.805980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.806341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.806373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.806741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.806772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.807137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.807182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.807542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.807572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.807815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.807845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.808224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.808257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.808611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.808644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.809006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.809037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.809403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.809436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.809811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.809843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.810201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.810232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.810624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-20 09:14:54.810656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-11-20 09:14:54.810788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.810822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.811218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.811253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.811624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.811658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.811890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.811921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.812290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.812323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.812685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.812719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.813082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.813112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.813476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.813508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.813876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.813910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.814279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.814310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.814663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.814696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.815059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.815090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.815469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.815501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.815733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.815768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.816025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.816056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.816276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.816308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.816684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.816717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.817079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.817110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.817460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.817491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.817848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.817879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.818241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.818273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.818643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.818684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.819039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.819071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.819300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.819332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.819716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.819748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.820124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.820156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.820559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.820591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.820952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.820985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.821360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.821392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.821757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.821790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.822169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.822202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.822577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-11-20 09:14:54.822608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-11-20 09:14:54.822971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.823002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.823387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.823420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.823735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.823765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.824046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:29.537 [2024-11-20 09:14:54.824089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:29.537 [2024-11-20 09:14:54.824098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:29.537 [2024-11-20 09:14:54.824105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:29.537 [2024-11-20 09:14:54.824112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:29.537 [2024-11-20 09:14:54.824117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.824148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.824508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.824539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.824902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.824934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.825310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.825344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.825710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.825741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.826111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.826142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.826376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:29.537 [2024-11-20 09:14:54.826501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.826536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.826665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:29.537 [2024-11-20 09:14:54.826880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.826909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.826830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:29.537 [2024-11-20 09:14:54.826832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:29.537 [2024-11-20 09:14:54.827182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.827216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.827586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.827619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.827871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.827901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.828265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.828298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.828676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.828707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.828943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.828974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.829341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.829373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.829742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.829774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.830122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.830156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.830515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.830547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.830817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.830849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.831198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.831231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.831462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.831493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.831859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.831890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.832132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.832173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.832550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.832588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.832941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.832972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.833208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.833242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.833492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.833524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.833872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.833904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.834269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.834302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.834556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.834587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-11-20 09:14:54.834820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-11-20 09:14:54.834852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.835216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.835250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.835607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.835638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.836002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.836032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.836417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.836449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.836800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.836833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.837195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.837228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.837603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.837634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.837983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.838014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.838394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.838427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.838671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.838701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.839095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.839127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.839502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.839535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.839791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.839820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.840184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.840217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.840593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.840623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.840852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.840883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.841223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.841258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.841520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.841550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.841908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.841940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.842304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.842338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.842580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.842615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.842876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.842907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.843143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.843182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.843547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.843579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.843945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.843977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.844232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.844263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.844632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.844665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.844920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.844950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.845304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.845337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.845699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.845731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.846084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.846118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.846527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.846560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.846801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.846838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.847198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.847231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.847400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.847429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.847774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.847804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.848036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-11-20 09:14:54.848067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
00:29:29.538 [2024-11-20 09:14:54.848312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.848344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.848707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.848739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.849093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.849125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.849375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.849412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.849753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.849784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.850147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.850191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.850562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.850592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.850951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.850981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.851344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.851377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.851743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.851776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.852049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.852079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.852305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.852337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.852707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.852739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.853020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.853049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.853284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.853317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.853659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.853691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.854056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.854087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.854520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.854551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.854908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.854942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.855185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.855218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.855585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.855616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.855978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.856009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.856366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.856399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.856767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.856798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.857208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.857243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.857564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.857595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.857961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.857991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.858331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.858361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.858618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.858648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.858807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.858838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.859116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.859148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.859398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.859430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.859808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.859839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.860091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.860121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.860522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.860556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.860820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.860858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.861240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.861273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.861519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-11-20 09:14:54.861549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
00:29:29.539 [2024-11-20 09:14:54.861924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.861956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.862383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.862416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.862627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.862658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.863012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.863043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.863403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.863436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.863711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.863743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.864017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.864047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.864398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.864428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.864803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.864834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.865066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.865101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.865502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.865535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.865798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.865832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.866197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.540 [2024-11-20 09:14:54.866231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.540 qpair failed and we were unable to recover it.
00:29:29.540 [2024-11-20 09:14:54.866621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.866653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.867002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.867035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.867271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.867302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.867661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.867691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.867931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.867960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 
00:29:29.540 [2024-11-20 09:14:54.868339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.868370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.868736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.868767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.868989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.869023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.869253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.869284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.869662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.869694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 
00:29:29.540 [2024-11-20 09:14:54.870072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.870104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.870482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.870515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.870746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.870778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.871122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.871154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.871526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.871556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 
00:29:29.540 [2024-11-20 09:14:54.871832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.871861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.872276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.872308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.872664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.872694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.873045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.873077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.873424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.873458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 
00:29:29.540 [2024-11-20 09:14:54.873809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.873840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.874083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.874114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.874526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.874559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.874851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.874882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 00:29:29.540 [2024-11-20 09:14:54.875231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.540 [2024-11-20 09:14:54.875268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.540 qpair failed and we were unable to recover it. 
00:29:29.541 [2024-11-20 09:14:54.875532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.875566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.875799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.875831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.876188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.876220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.876575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.876605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.876737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.876767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 
00:29:29.541 [2024-11-20 09:14:54.877112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.877144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.877434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.877464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.877815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.877846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.878203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.878237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.878604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.878634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 
00:29:29.541 [2024-11-20 09:14:54.878848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.878877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.879257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.879289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.879707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.879740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.880070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.880102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.880478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.880513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 
00:29:29.541 [2024-11-20 09:14:54.880871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.880903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.881312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.881345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.881618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.881648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.881865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.881896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.882114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.882144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 
00:29:29.541 [2024-11-20 09:14:54.882502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.882533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.882899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.882930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.883298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.883331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.883687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.883718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.884075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.884107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 
00:29:29.541 [2024-11-20 09:14:54.884383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.884416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.884769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.884800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.884908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.884936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.885203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.885237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.885492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.885522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 
00:29:29.541 [2024-11-20 09:14:54.885781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.885811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.886045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.886075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-11-20 09:14:54.886323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-11-20 09:14:54.886354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.886737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.886770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.887120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.887150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 
00:29:29.542 [2024-11-20 09:14:54.887531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.887562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.887934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.887964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.888348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.888380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.888739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.888770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.889126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.889172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 
00:29:29.542 [2024-11-20 09:14:54.889514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.889546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.889771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.889801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.889938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.889969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.890211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.890243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.890486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.890515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 
00:29:29.542 [2024-11-20 09:14:54.890626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.890658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.890899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.890931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.891306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.891338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.891576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.891606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.891983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.892014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 
00:29:29.542 [2024-11-20 09:14:54.892383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.892414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.892630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.892660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.893012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.893042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.893423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.893457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.893665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.893697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 
00:29:29.542 [2024-11-20 09:14:54.893913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.893944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.894322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.894353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.894559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.894588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.894955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.894985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-11-20 09:14:54.895328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-11-20 09:14:54.895361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 
00:29:29.542 [2024-11-20 09:14:54.895742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:29.542 [2024-11-20 09:14:54.895773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 
00:29:29.542 qpair failed and we were unable to recover it. 
00:29:29.545 [... the same three-line sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 09:14:54.895 through 09:14:54.935 ...] 
00:29:29.545 [2024-11-20 09:14:54.935426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-11-20 09:14:54.935455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-11-20 09:14:54.935781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-11-20 09:14:54.935810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-11-20 09:14:54.936175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-11-20 09:14:54.936207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-11-20 09:14:54.936561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-11-20 09:14:54.936591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-11-20 09:14:54.936802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.936831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 
00:29:29.546 [2024-11-20 09:14:54.937089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.937124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.937383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.937415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.937773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.937805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.938054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.938086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.938316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.938347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 
00:29:29.546 [2024-11-20 09:14:54.938593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.938628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.938980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.939012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.939292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.939324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.939678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.939710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.940025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.940054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 
00:29:29.546 [2024-11-20 09:14:54.940364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.940394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.940733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.940765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.941107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.941138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.941513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.941544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.941904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.941935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 
00:29:29.546 [2024-11-20 09:14:54.942186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.942219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.942576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.942607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.942973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.943005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.943346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.943376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.943645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.943680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 
00:29:29.546 [2024-11-20 09:14:54.943791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.943825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.944185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.944216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.944566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.944597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.944824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.944854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.945148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.945188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 
00:29:29.546 [2024-11-20 09:14:54.945498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.945529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.945885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.945914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.946275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.946308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.946667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.946696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.946920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.946949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 
00:29:29.546 [2024-11-20 09:14:54.947312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.947344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.947442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.947472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.947807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.947839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.948064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.948094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.948327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.948358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 
00:29:29.546 [2024-11-20 09:14:54.948730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.948760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.546 [2024-11-20 09:14:54.949124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.546 [2024-11-20 09:14:54.949154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.546 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.949503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.949533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.949905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.949935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.950291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.950324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 
00:29:29.547 [2024-11-20 09:14:54.950562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.950592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.950949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.950979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.951333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.951365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.951723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.951753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.951969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.951998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 
00:29:29.547 [2024-11-20 09:14:54.952398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.952429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.952533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.952562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.952792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.952822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.953199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.953230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.953437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.953467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 
00:29:29.547 [2024-11-20 09:14:54.953775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.953805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.954049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.954078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.954415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.954448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.954796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.954827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.955184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.955216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 
00:29:29.547 [2024-11-20 09:14:54.955563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.955593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.955935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.955966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.956200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.956231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.956483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.956515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.956723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.956761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 
00:29:29.547 [2024-11-20 09:14:54.957113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.957143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.957363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.957392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.957747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.957777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.958013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.958042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.958285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.958320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 
00:29:29.547 [2024-11-20 09:14:54.958656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.958688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.958900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.958929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.959290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.959321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.959669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.959698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 00:29:29.547 [2024-11-20 09:14:54.960064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.960094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it. 
00:29:29.547 [2024-11-20 09:14:54.960464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.547 [2024-11-20 09:14:54.960496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.547 qpair failed and we were unable to recover it.
[the three messages above repeat verbatim for every subsequent reconnect attempt, wall-clock timestamps 09:14:54.960857 through 09:14:54.999923, run clock 00:29:29.547-00:29:29.551 — each attempt fails identically with errno = 111 (ECONNREFUSED) for the same tqpair=0x7f0634000b90, addr=10.0.0.2, port=4420]
00:29:29.551 [2024-11-20 09:14:55.000137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.000177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.000552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.000584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.000945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.000975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.001317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.001347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.001693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.001722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 
00:29:29.551 [2024-11-20 09:14:55.002109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.002139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.002489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.002520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.002862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.002893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.003236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.003269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.003638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.003669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 
00:29:29.551 [2024-11-20 09:14:55.004035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.004065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.004424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.004456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.004808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.004839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.005180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.005212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.005553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.005583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 
00:29:29.551 [2024-11-20 09:14:55.005809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.005838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.006149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.006188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.006442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.006472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.006822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.006852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.007213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.007246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 
00:29:29.551 [2024-11-20 09:14:55.007596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.007626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.007966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.008004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.008382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.008414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.008587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.008617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.008964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.008994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 
00:29:29.551 [2024-11-20 09:14:55.009326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.009358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.009715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.009744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.009963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.009993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.010355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.010386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.010730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.010763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 
00:29:29.551 [2024-11-20 09:14:55.010968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.010998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.011324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.011356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.011731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.011761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.012116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.012147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-11-20 09:14:55.012423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.012453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 
00:29:29.551 [2024-11-20 09:14:55.012797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-11-20 09:14:55.012829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.013064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.013095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.013342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.013372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.013742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.013774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.014128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.014167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 
00:29:29.552 [2024-11-20 09:14:55.014519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.014550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.014890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.014921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.015180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.015212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.015447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.015477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.015763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.015792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 
00:29:29.552 [2024-11-20 09:14:55.016123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.016152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.016509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.016541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.016893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.016923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.017282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.017314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.017684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.017715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 
00:29:29.552 [2024-11-20 09:14:55.018079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.018108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.018336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.018366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.018696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.018726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.019129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.019175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.019541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.019571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 
00:29:29.552 [2024-11-20 09:14:55.019923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.019953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.020299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.020332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.020696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.020726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.020838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.020869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.021090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.021119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 
00:29:29.552 [2024-11-20 09:14:55.021377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.021409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.021527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.021561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.021860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.021891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.022092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.022122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.022497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.022528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 
00:29:29.552 [2024-11-20 09:14:55.022878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.022910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.023262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.023293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.023659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.023690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.023909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.023939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.024282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.024315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 
00:29:29.552 [2024-11-20 09:14:55.024663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.024694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.024904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.024933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.025287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-11-20 09:14:55.025318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-11-20 09:14:55.025689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-11-20 09:14:55.025720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-11-20 09:14:55.026065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-11-20 09:14:55.026095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 
00:29:29.553 [2024-11-20 09:14:55.026451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-11-20 09:14:55.026483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-11-20 09:14:55.026688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-11-20 09:14:55.026717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-11-20 09:14:55.027063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-11-20 09:14:55.027092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-11-20 09:14:55.027451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-11-20 09:14:55.027482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-11-20 09:14:55.027737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-11-20 09:14:55.027770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-11-20 09:14:55.066576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.066606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.066971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.067000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.067278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.067309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.067553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.067582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.067917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.067947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-11-20 09:14:55.068284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.068316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.068522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.068550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.068758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.068787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.069023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.069054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.069428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.069459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-11-20 09:14:55.069652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.069682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.069908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.069940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.070315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.070346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.070703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.070734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.070949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.070979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-11-20 09:14:55.071330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.071363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.071721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.071751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.072090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.072122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.072357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.072389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.072728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.072760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-11-20 09:14:55.073096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.073127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.830 [2024-11-20 09:14:55.073510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.830 [2024-11-20 09:14:55.073542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.830 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.073878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.073908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.074276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.074309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.074621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.074651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-11-20 09:14:55.074987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.075017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.075385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.075418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.075751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.075781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.076122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.076153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.076394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.076424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-11-20 09:14:55.076771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.076801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.077017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.077054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.077398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.077428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.077639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.077668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.077911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.077941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-11-20 09:14:55.078274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.078306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.078512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.078541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.078742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.078772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.078998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.079031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.079359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.079393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-11-20 09:14:55.079606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.079639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.079882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.079910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.080184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.080221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.080574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.080605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.080818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.080847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-11-20 09:14:55.081187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.081219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.081564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.081596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.081931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.081960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.082308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.082341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.082581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.082611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-11-20 09:14:55.082871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.082900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.083115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.083148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.083399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.083428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.083675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.083707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.084060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.084090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-11-20 09:14:55.084451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.084483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.084839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.084868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.085084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.085113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.085459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.085492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-11-20 09:14:55.085859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.831 [2024-11-20 09:14:55.085888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 
00:29:29.832 [2024-11-20 09:14:55.086244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.086276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.086649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.086679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.086908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.086937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.087295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.087325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.087676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.087707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 
00:29:29.832 [2024-11-20 09:14:55.088041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.088070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.088432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.088462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.088795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.088826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.089047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.089075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.089431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.089461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 
00:29:29.832 [2024-11-20 09:14:55.089831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.089861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.090222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.090258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.090687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.090717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.091079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.091109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-11-20 09:14:55.091503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.832 [2024-11-20 09:14:55.091534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 00:29:29.832 qpair failed and we were unable to recover it. 
00:29:29.832 [2024-11-20 09:14:55.091880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.832 [2024-11-20 09:14:55.091910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:29.832 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet for tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420 repeats continuously from 09:14:55.091880 through 09:14:55.130984; repeated entries elided ...]
00:29:29.835 [2024-11-20 09:14:55.131247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5be00 is same with the state(6) to be set
00:29:29.835 [2024-11-20 09:14:55.131765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.835 [2024-11-20 09:14:55.131814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.835 qpair failed and we were unable to recover it.
00:29:29.835 [2024-11-20 09:14:55.132148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.132195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 00:29:29.835 [2024-11-20 09:14:55.132609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.132704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 00:29:29.835 [2024-11-20 09:14:55.132978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.133014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 00:29:29.835 [2024-11-20 09:14:55.133391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.133425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 00:29:29.835 [2024-11-20 09:14:55.133733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.133762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 
00:29:29.835 [2024-11-20 09:14:55.134048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.134080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 00:29:29.835 [2024-11-20 09:14:55.134339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.134369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 00:29:29.835 [2024-11-20 09:14:55.134739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.134768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 00:29:29.835 [2024-11-20 09:14:55.135111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.135141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 00:29:29.835 [2024-11-20 09:14:55.135536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.135567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 
00:29:29.835 [2024-11-20 09:14:55.135875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.135904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 00:29:29.835 [2024-11-20 09:14:55.136116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.136145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 00:29:29.835 [2024-11-20 09:14:55.136357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.136387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.835 qpair failed and we were unable to recover it. 00:29:29.835 [2024-11-20 09:14:55.136476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.835 [2024-11-20 09:14:55.136505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.136739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.136780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 
00:29:29.836 [2024-11-20 09:14:55.137028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.137057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.137255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.137288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.137635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.137664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.138020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.138050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.138402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.138433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 
00:29:29.836 [2024-11-20 09:14:55.138550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.138583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.138940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.138970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.139209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.139240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.139631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.139661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.140024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.140055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 
00:29:29.836 [2024-11-20 09:14:55.140286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.140316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.140663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.140693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.141042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.141071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.141434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.141465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.141685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.141713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 
00:29:29.836 [2024-11-20 09:14:55.142062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.142091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.142457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.142489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.142841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.142870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.143234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.143265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.143638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.143668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 
00:29:29.836 [2024-11-20 09:14:55.144010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.144040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.144308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.144339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.144689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.144719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.145062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.145092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-11-20 09:14:55.145334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.145365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 
00:29:29.836 [2024-11-20 09:14:55.145721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-11-20 09:14:55.145751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.146030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.146060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.146399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.146430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.146782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.146812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.147175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.147207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 
00:29:29.837 [2024-11-20 09:14:55.147427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.147457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.147658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.147686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.147820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.147850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.148173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.148204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.148465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.148493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 
00:29:29.837 [2024-11-20 09:14:55.148861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.148891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.149112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.149141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.149487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.149518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.149845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.149875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.150089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.150125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 
00:29:29.837 [2024-11-20 09:14:55.150456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.150487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.150884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.150914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.151168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.151206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.151540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.151570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.151915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.151945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 
00:29:29.837 [2024-11-20 09:14:55.152207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.152245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.152613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.152643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.152849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.152878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.153103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.153131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.153513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.153544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 
00:29:29.837 [2024-11-20 09:14:55.153904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.153933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.154277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.154308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.154673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.154702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.154953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.154983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-11-20 09:14:55.155316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.155347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 
00:29:29.837 [2024-11-20 09:14:55.155660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-11-20 09:14:55.155690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.838 [2024-11-20 09:14:55.156029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-11-20 09:14:55.156058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-11-20 09:14:55.156398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-11-20 09:14:55.156429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-11-20 09:14:55.156634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-11-20 09:14:55.156663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-11-20 09:14:55.156868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-11-20 09:14:55.156899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 
00:29:29.838 [2024-11-20 09:14:55.157291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.157321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.157673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.157703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.157939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.157969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.158211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.158240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.158601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.158631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.158861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.158895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.159248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.159280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.159495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.159523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.159881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.159911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.160148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.160189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.160396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.160426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.160663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.160696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.161047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.161078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.161413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.161444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.161802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.161831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.162217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.162247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.162606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.838 [2024-11-20 09:14:55.162636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.838 qpair failed and we were unable to recover it.
00:29:29.838 [2024-11-20 09:14:55.162974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.163003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.163395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.163426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.163649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.163684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.163928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.163957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.164083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.164117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.164339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.164370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.164627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.164655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.164990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.165019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.165382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.165414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.165761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.165791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.166034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.166064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.166404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.166434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.166640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.166668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.166881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.166912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.167155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.167194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.167543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.167573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.167956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.167986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.168336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.168366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.168731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.168762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.169089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.169119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.169339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.169369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.169699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.169729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.170078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.170108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.170460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.170491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.170846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.170876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.171087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.171117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.171485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.171516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.171746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.171774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.172124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.172153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.172502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.172532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.172779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.172808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.173152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.173195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.173389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.839 [2024-11-20 09:14:55.173419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.839 qpair failed and we were unable to recover it.
00:29:29.839 [2024-11-20 09:14:55.173760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.173790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.173991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.174021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.174391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.174422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.174779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.174808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.175179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.175209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.175550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.175581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.175935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.175965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.176223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.176257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.176583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.176613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.176839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.176878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.177225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.177277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.177628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.177657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.177888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.177917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.178292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.178326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.178555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.178584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.178783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.178813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.179037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.179066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.179269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.179299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.179606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.179635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.179988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.180018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.180327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.180358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.180595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.180623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.180827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.180856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.181201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.181232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.181578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.181608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.181851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.181880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.182231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.182263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.182606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.182634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.182976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.183004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.183205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.183235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.840 [2024-11-20 09:14:55.183633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.840 [2024-11-20 09:14:55.183661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.840 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.183856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.183885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.184096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.184127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.184507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.184539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.184886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.184917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.185266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.185296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.185635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.185667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.185869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.185897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.186129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.186157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.186401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.186430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.186827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.186857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.187227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.187258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.187611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.187641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.188013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.188041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.188393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.188424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.188695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.188725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.189057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.189087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.189299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.189329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.189549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.189577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.189924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.189965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.190305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.190337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.190446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.190473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.190714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.190742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.191120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.191149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.191511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.191541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.191779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.191808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.192105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.192134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.192513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.192544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.192901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.192931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.193293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.193324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.193681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.193711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.194083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.194114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.194471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.194501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.841 [2024-11-20 09:14:55.194709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.841 [2024-11-20 09:14:55.194739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.841 qpair failed and we were unable to recover it.
00:29:29.842 [2024-11-20 09:14:55.194941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.842 [2024-11-20 09:14:55.194970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.842 qpair failed and we were unable to recover it.
00:29:29.842 [2024-11-20 09:14:55.195322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.195353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.195697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.195726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.195931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.195960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.196300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.196330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.196688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.196718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 
00:29:29.842 [2024-11-20 09:14:55.196951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.196981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.197193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.197224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.197596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.197627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.197975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.198004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.198157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.198196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 
00:29:29.842 [2024-11-20 09:14:55.198428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.198462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.198669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.198699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.199043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.199073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.199458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.199491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.199828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.199858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 
00:29:29.842 [2024-11-20 09:14:55.200240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.200271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.200485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.200513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.200814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.200843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.201185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.201217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.201546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.201574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 
00:29:29.842 [2024-11-20 09:14:55.201908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.201937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.202282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.202311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.202514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.202542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.202903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.202932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.203155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.203211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 
00:29:29.842 [2024-11-20 09:14:55.203585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.203615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.204005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.204034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.204396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.204427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.842 [2024-11-20 09:14:55.204760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.842 [2024-11-20 09:14:55.204793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.842 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.205146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.205188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 
00:29:29.843 [2024-11-20 09:14:55.205512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.205543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.205892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.205920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.206132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.206168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.206529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.206560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.206925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.206955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 
00:29:29.843 [2024-11-20 09:14:55.207179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.207209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.207563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.207592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.207851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.207879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.208212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.208242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.208442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.208471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 
00:29:29.843 [2024-11-20 09:14:55.208816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.208846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.209180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.209211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.209546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.209576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.209925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.209955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.210323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.210354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 
00:29:29.843 [2024-11-20 09:14:55.210698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.210728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.211119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.211149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.211512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.211543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.211886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.211916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.212148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.212190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 
00:29:29.843 [2024-11-20 09:14:55.212561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.212591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.212949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.212978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.213209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.213239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.213590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.213619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.213955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.213985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 
00:29:29.843 [2024-11-20 09:14:55.214329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.214360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-11-20 09:14:55.214585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-11-20 09:14:55.214615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.214957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.214986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.215328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.215359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.215561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.215591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 
00:29:29.844 [2024-11-20 09:14:55.215941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.215971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.216307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.216337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.216680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.216709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.216940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.216970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.217313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.217349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 
00:29:29.844 [2024-11-20 09:14:55.217675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.217705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.218033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.218063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.218257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.218286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.218600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.218630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.218969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.218998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 
00:29:29.844 [2024-11-20 09:14:55.219288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.219319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.219683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.219714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.219923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.219951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.220329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.220360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-11-20 09:14:55.220721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-11-20 09:14:55.220751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 
00:29:29.844 [2024-11-20 09:14:55.220976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.844 [2024-11-20 09:14:55.221006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420
00:29:29.844 qpair failed and we were unable to recover it.
[last three messages repeated for each subsequent connect() retry between 09:14:55.221322 and 09:14:55.260059, same tqpair=0x7f0628000b90, addr=10.0.0.2, port=4420, errno = 111]
00:29:29.848 [2024-11-20 09:14:55.260313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.848 [2024-11-20 09:14:55.260346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.848 qpair failed and we were unable to recover it. 00:29:29.848 [2024-11-20 09:14:55.260544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.848 [2024-11-20 09:14:55.260574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.848 qpair failed and we were unable to recover it. 00:29:29.848 [2024-11-20 09:14:55.260792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.848 [2024-11-20 09:14:55.260820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.848 qpair failed and we were unable to recover it. 00:29:29.848 [2024-11-20 09:14:55.261055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.848 [2024-11-20 09:14:55.261083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.848 qpair failed and we were unable to recover it. 00:29:29.848 [2024-11-20 09:14:55.261295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.848 [2024-11-20 09:14:55.261326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.848 qpair failed and we were unable to recover it. 
00:29:29.848 [2024-11-20 09:14:55.261624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.848 [2024-11-20 09:14:55.261654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.848 qpair failed and we were unable to recover it. 00:29:29.848 [2024-11-20 09:14:55.261984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.848 [2024-11-20 09:14:55.262014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.848 qpair failed and we were unable to recover it. 00:29:29.848 [2024-11-20 09:14:55.262251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.848 [2024-11-20 09:14:55.262285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.848 qpair failed and we were unable to recover it. 00:29:29.848 [2024-11-20 09:14:55.262653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.848 [2024-11-20 09:14:55.262684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.848 qpair failed and we were unable to recover it. 00:29:29.848 [2024-11-20 09:14:55.263028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.848 [2024-11-20 09:14:55.263057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 
00:29:29.849 [2024-11-20 09:14:55.263467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.263498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.263710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.263739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.263974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.264003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.264353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.264384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.264710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.264741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 
00:29:29.849 [2024-11-20 09:14:55.264938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.264967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.265326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.265356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.265656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.265685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.266039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.266068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.266402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.266434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 
00:29:29.849 [2024-11-20 09:14:55.266764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.266793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.267175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.267207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.267559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.267589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.267784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.267812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.268026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.268059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 
00:29:29.849 [2024-11-20 09:14:55.268416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.268448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.268657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.268685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.269035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.269064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.269431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.269461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.269829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.269858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 
00:29:29.849 [2024-11-20 09:14:55.270194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.270226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.270613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.270642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.270979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.271009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.271354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.271384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.271742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.271771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 
00:29:29.849 [2024-11-20 09:14:55.272123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.272152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.272392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.272424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.272658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.272687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.273016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.273046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.273394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.273426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 
00:29:29.849 [2024-11-20 09:14:55.273770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.849 [2024-11-20 09:14:55.273800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.849 qpair failed and we were unable to recover it. 00:29:29.849 [2024-11-20 09:14:55.274139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.274178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.274518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.274547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.274917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.274946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.275284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.275314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 
00:29:29.850 [2024-11-20 09:14:55.275662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.275691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.275784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.275811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0628000b90 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.276286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.276381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.276797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.276836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.277072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.277104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 
00:29:29.850 [2024-11-20 09:14:55.277543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.277636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.278052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.278090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.278296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.278330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.278639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.278669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.279014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.279044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 
00:29:29.850 [2024-11-20 09:14:55.279409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.279440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.279783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.279812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.280004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.280034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.280372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.280403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.280775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.280804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 
00:29:29.850 [2024-11-20 09:14:55.281155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.281196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.281547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.281590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.281940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.281969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.282233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.282263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 00:29:29.850 [2024-11-20 09:14:55.282623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.850 [2024-11-20 09:14:55.282652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.850 qpair failed and we were unable to recover it. 
00:29:29.851 [2024-11-20 09:14:55.283009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.851 [2024-11-20 09:14:55.283037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.851 qpair failed and we were unable to recover it. 00:29:29.851 [2024-11-20 09:14:55.283251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.851 [2024-11-20 09:14:55.283282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.851 qpair failed and we were unable to recover it. 00:29:29.851 [2024-11-20 09:14:55.283624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.851 [2024-11-20 09:14:55.283653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.851 qpair failed and we were unable to recover it. 00:29:29.851 [2024-11-20 09:14:55.283846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.851 [2024-11-20 09:14:55.283874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.851 qpair failed and we were unable to recover it. 00:29:29.851 [2024-11-20 09:14:55.284203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.851 [2024-11-20 09:14:55.284234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.851 qpair failed and we were unable to recover it. 
00:29:29.851 [2024-11-20 09:14:55.284578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.851 [2024-11-20 09:14:55.284608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.851 qpair failed and we were unable to recover it. 00:29:29.851 [2024-11-20 09:14:55.284916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.851 [2024-11-20 09:14:55.284945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.851 qpair failed and we were unable to recover it. 00:29:29.851 [2024-11-20 09:14:55.285315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.851 [2024-11-20 09:14:55.285345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.851 qpair failed and we were unable to recover it. 00:29:29.851 [2024-11-20 09:14:55.285710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.851 [2024-11-20 09:14:55.285738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.851 qpair failed and we were unable to recover it. 00:29:29.851 [2024-11-20 09:14:55.285933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.851 [2024-11-20 09:14:55.285961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.851 qpair failed and we were unable to recover it. 
00:29:29.851 [2024-11-20 09:14:55.286352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:29.851 [2024-11-20 09:14:55.286385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 
00:29:29.851 qpair failed and we were unable to recover it. 
00:29:29.854 [the same three-line error sequence repeats for every reconnect attempt from 09:14:55.286744 through 09:14:55.324485: connect() to 10.0.0.2, port 4420 fails with errno = 111 (ECONNREFUSED) for tqpair=0xb660c0, and the qpair cannot be recovered] 
00:29:29.855 [2024-11-20 09:14:55.324850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.324879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.325234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.325264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.325457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.325485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.325844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.325875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.326087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.326119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 
00:29:29.855 [2024-11-20 09:14:55.326343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.326372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.326601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.326630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.326847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.326875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.327226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.327257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.327630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.327659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 
00:29:29.855 [2024-11-20 09:14:55.328044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.328073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.328321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.328352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.328443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.328471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.328774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.328802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.329152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.329192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 
00:29:29.855 [2024-11-20 09:14:55.329533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.329562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.329912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.329941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.330292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.330322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.330689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.330718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.330940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.330968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 
00:29:29.855 [2024-11-20 09:14:55.331207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.331238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.331591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.331621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.331974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.332003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.332359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.332390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.332718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.332748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 
00:29:29.855 [2024-11-20 09:14:55.333091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.333120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-11-20 09:14:55.333333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-11-20 09:14:55.333363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.333718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.333747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.333956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.333984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.334328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.334360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 
00:29:29.856 [2024-11-20 09:14:55.334687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.334716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.335077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.335106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.335465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.335496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.335846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.335876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.336231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.336263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 
00:29:29.856 [2024-11-20 09:14:55.336647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.336678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.337007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.337036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.337388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.337419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.337791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.337820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.338175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.338206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 
00:29:29.856 [2024-11-20 09:14:55.338566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.338595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.338930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.338959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.339258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.339287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.339638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.339667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.340002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.340031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 
00:29:29.856 [2024-11-20 09:14:55.340357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.340386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.340590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.340619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.340960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.340989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.341229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.341269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.341576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.341605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 
00:29:29.856 [2024-11-20 09:14:55.341944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.341973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:29.856 qpair failed and we were unable to recover it. 00:29:29.856 [2024-11-20 09:14:55.342330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.856 [2024-11-20 09:14:55.342360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.342583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.342613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.342954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.342985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.343340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.343370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 
00:29:30.137 [2024-11-20 09:14:55.343729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.343758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.343984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.344012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.344367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.344400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.344596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.344626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.344959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.344988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 
00:29:30.137 [2024-11-20 09:14:55.345330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.345361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.345704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.345733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.346086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.346116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.346447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.346478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.346833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.346862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 
00:29:30.137 [2024-11-20 09:14:55.347181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.347211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.347530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.347559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.347897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.347925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.348194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.348225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.348578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.348607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 
00:29:30.137 [2024-11-20 09:14:55.348904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.348933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.349261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.349292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.349629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.349658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.349868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.349896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.350246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.350277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 
00:29:30.137 [2024-11-20 09:14:55.350504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.137 [2024-11-20 09:14:55.350531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.137 qpair failed and we were unable to recover it. 00:29:30.137 [2024-11-20 09:14:55.350893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.138 [2024-11-20 09:14:55.350922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.138 qpair failed and we were unable to recover it. 00:29:30.138 [2024-11-20 09:14:55.351167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.138 [2024-11-20 09:14:55.351197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.138 qpair failed and we were unable to recover it. 00:29:30.138 [2024-11-20 09:14:55.351561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.138 [2024-11-20 09:14:55.351590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.138 qpair failed and we were unable to recover it. 00:29:30.138 [2024-11-20 09:14:55.351939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.138 [2024-11-20 09:14:55.351967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.138 qpair failed and we were unable to recover it. 
00:29:30.138 [2024-11-20 09:14:55.352320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.138 [2024-11-20 09:14:55.352350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.138 qpair failed and we were unable to recover it. 00:29:30.138 [2024-11-20 09:14:55.352672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.138 [2024-11-20 09:14:55.352702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.138 qpair failed and we were unable to recover it. 00:29:30.138 [2024-11-20 09:14:55.353056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.138 [2024-11-20 09:14:55.353085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.138 qpair failed and we were unable to recover it. 00:29:30.138 [2024-11-20 09:14:55.353278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.138 [2024-11-20 09:14:55.353308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.138 qpair failed and we were unable to recover it. 00:29:30.138 [2024-11-20 09:14:55.353630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.138 [2024-11-20 09:14:55.353659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.138 qpair failed and we were unable to recover it. 
00:29:30.138 [2024-11-20 09:14:55.353749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.353776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.353886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.353918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.354282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.354312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.354504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.354533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.354883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.354920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.355133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.355169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.355368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.355397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.355741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.355770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.356125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.356153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.356474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.356503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.356861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.356891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.357247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.357278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.357635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.357665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.358024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.358054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.358274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.358305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.358605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.358633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.358986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.359015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.359410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.359440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.359787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.359817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.360067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.360096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.360457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.360487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.360783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.360811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.361173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.361203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.361545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.361576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.361921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.361950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.362300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.362331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.362538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.362566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.362904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.362933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.138 [2024-11-20 09:14:55.363313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.138 [2024-11-20 09:14:55.363344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.138 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.363544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.363571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.363934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.363962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.364223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.364259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.364604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.364632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.365008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.365037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.365348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.365378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.365729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.365758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.366102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.366131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.366412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.366442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.366774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.366803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.367154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.367195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.367546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.367574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.367788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.367816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.368190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.368221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.368546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.368575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.368910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.368938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.369172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.369203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.369557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.369588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.369937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.369966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.370299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.370330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.370676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.370705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.371050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.371077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.371305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.371335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.371687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.371717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.371813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.371840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.372175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.372205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.372428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.372462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.372796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.372825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.373181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.373211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.373548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.373577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.373947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.373977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.374068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.374095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.374595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.374687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.375053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.375090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.375318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.375355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.375728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.375758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.376080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.376110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:30.139 qpair failed and we were unable to recover it.
00:29:30.139 [2024-11-20 09:14:55.376471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.139 [2024-11-20 09:14:55.376564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0634000b90 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.376887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.376921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.377257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.377289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.377510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.377538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.377735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.377765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.377996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.378024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.378361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.378392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.378738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.378767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.378854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.378881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.379140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.379177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.379539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.379568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.379776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.379804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.380014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.380043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.380264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.380297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.380686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.380716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.381015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.381043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.381395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.381425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.381770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.381798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.382136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.382175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.382477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.382506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.382748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.382777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.383082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.383111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.383453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.383483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.383830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.383859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.384194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.384224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.384540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.384570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.384795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.384828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.385176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.385207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.385626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.140 [2024-11-20 09:14:55.385656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.140 qpair failed and we were unable to recover it.
00:29:30.140 [2024-11-20 09:14:55.385854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.140 [2024-11-20 09:14:55.385883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.140 qpair failed and we were unable to recover it. 00:29:30.140 [2024-11-20 09:14:55.386247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.140 [2024-11-20 09:14:55.386278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.140 qpair failed and we were unable to recover it. 00:29:30.140 [2024-11-20 09:14:55.386619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.140 [2024-11-20 09:14:55.386649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.140 qpair failed and we were unable to recover it. 00:29:30.140 [2024-11-20 09:14:55.386994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.140 [2024-11-20 09:14:55.387022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.140 qpair failed and we were unable to recover it. 00:29:30.140 [2024-11-20 09:14:55.387375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.140 [2024-11-20 09:14:55.387411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.140 qpair failed and we were unable to recover it. 
00:29:30.140 [2024-11-20 09:14:55.387621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.140 [2024-11-20 09:14:55.387650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.140 qpair failed and we were unable to recover it. 00:29:30.140 [2024-11-20 09:14:55.387860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.140 [2024-11-20 09:14:55.387889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.140 qpair failed and we were unable to recover it. 00:29:30.140 [2024-11-20 09:14:55.388283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.140 [2024-11-20 09:14:55.388313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.140 qpair failed and we were unable to recover it. 00:29:30.140 [2024-11-20 09:14:55.388670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.140 [2024-11-20 09:14:55.388700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.140 qpair failed and we were unable to recover it. 00:29:30.140 [2024-11-20 09:14:55.388892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.140 [2024-11-20 09:14:55.388920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.140 qpair failed and we were unable to recover it. 
00:29:30.141 [2024-11-20 09:14:55.389284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.389315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.389535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.389564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.389939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.389968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.390316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.390346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.390651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.390682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 
00:29:30.141 [2024-11-20 09:14:55.390794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.390821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.391067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.391096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.391462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.391493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.391827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.391858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.392089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.392117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 
00:29:30.141 [2024-11-20 09:14:55.392394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.392426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.392770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.392798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.393173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.393204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.393468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.393496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.393832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.393861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 
00:29:30.141 [2024-11-20 09:14:55.394251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.394281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.394503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.394531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.394894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.394923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.395306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.395336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.395572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.395600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 
00:29:30.141 [2024-11-20 09:14:55.395960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.395988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.396206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.396236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.396521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.396550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.396890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.396920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.397020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.397047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 
00:29:30.141 [2024-11-20 09:14:55.397407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.397437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.397750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.397780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.397985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.398014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.398440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.398470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.398702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.398730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 
00:29:30.141 [2024-11-20 09:14:55.398957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.398987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.399078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.399105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.399364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.141 [2024-11-20 09:14:55.399393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.141 qpair failed and we were unable to recover it. 00:29:30.141 [2024-11-20 09:14:55.399751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.399780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.400153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.400190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 
00:29:30.142 [2024-11-20 09:14:55.400396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.400425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.400779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.400809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.401008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.401037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.401376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.401407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.401629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.401659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 
00:29:30.142 [2024-11-20 09:14:55.401898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.401927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.402268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.402299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.402665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.402694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.403043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.403072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.403410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.403441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 
00:29:30.142 [2024-11-20 09:14:55.403808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.403836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.404083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.404111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.404254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.404286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.404514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.404542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.404890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.404920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 
00:29:30.142 [2024-11-20 09:14:55.405282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.405312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.405675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.405704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.406048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.406075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.406309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.406339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.406549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.406578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 
00:29:30.142 [2024-11-20 09:14:55.406825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.406855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.407093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.407122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.407355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.407384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.407732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.407762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.408123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.408153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 
00:29:30.142 [2024-11-20 09:14:55.408510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.408538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.408823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.408852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.409241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.409282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.409649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.409678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.410014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.410043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 
00:29:30.142 [2024-11-20 09:14:55.410255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.410286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.142 qpair failed and we were unable to recover it. 00:29:30.142 [2024-11-20 09:14:55.410490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.142 [2024-11-20 09:14:55.410519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 00:29:30.143 [2024-11-20 09:14:55.410869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.410898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 00:29:30.143 [2024-11-20 09:14:55.411112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.411142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 00:29:30.143 [2024-11-20 09:14:55.411499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.411528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 
00:29:30.143 [2024-11-20 09:14:55.411753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.411783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 00:29:30.143 [2024-11-20 09:14:55.412085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.412115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 00:29:30.143 [2024-11-20 09:14:55.412488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.412519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 00:29:30.143 [2024-11-20 09:14:55.412865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.412896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 00:29:30.143 [2024-11-20 09:14:55.413251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.413283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 
00:29:30.143 [2024-11-20 09:14:55.413493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.413521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 00:29:30.143 [2024-11-20 09:14:55.413743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.413772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 00:29:30.143 [2024-11-20 09:14:55.414034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.414062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 00:29:30.143 [2024-11-20 09:14:55.414413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.414444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 00:29:30.143 [2024-11-20 09:14:55.414805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.143 [2024-11-20 09:14:55.414834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.143 qpair failed and we were unable to recover it. 
00:29:30.146 [2024-11-20 09:14:55.452752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-11-20 09:14:55.452781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-11-20 09:14:55.452995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-11-20 09:14:55.453023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-11-20 09:14:55.453136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-11-20 09:14:55.453172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 
00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Write completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Write completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Write completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Write completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Write completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 
Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Write completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Write completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Write completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Write completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Write completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Read completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 Write completed with error (sct=0, sc=8) 00:29:30.146 starting I/O failed 00:29:30.146 [2024-11-20 09:14:55.453922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.146 [2024-11-20 09:14:55.454501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-11-20 09:14:55.454605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f062c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-11-20 09:14:55.454940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-11-20 09:14:55.454972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-11-20 09:14:55.455304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-11-20 09:14:55.455335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 
00:29:30.146 [2024-11-20 09:14:55.455541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-11-20 09:14:55.455570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-11-20 09:14:55.455833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-11-20 09:14:55.455862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-11-20 09:14:55.456188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-11-20 09:14:55.456219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-11-20 09:14:55.456464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-11-20 09:14:55.456492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.456867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.456896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 
00:29:30.147 [2024-11-20 09:14:55.457105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.457133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.457373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.457403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.457764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.457793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.458115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.458144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.458417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.458448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 
00:29:30.147 [2024-11-20 09:14:55.458826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.458855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.459213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.459244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.459558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.459588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.459934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.459963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.460294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.460324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 
00:29:30.147 [2024-11-20 09:14:55.460578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.460607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.460933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.460962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.461324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.461354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.461701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.461730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.462033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.462065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 
00:29:30.147 [2024-11-20 09:14:55.462268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.462298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.462532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.462560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.462770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.462799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.463061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.463090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.463404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.463435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 
00:29:30.147 [2024-11-20 09:14:55.463631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.463660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.463999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.464029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.464265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.464298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.464640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.464670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.465004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.465034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 
00:29:30.147 [2024-11-20 09:14:55.465402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.465432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.465767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.465797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.466031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.466061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.466404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.466434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.466775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.466805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 
00:29:30.147 [2024-11-20 09:14:55.467141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.467181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.467483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.467518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.467607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.467635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-11-20 09:14:55.467967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-11-20 09:14:55.467996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.468316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.468346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 
00:29:30.148 [2024-11-20 09:14:55.468578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.468607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.468839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.468867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.469213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.469243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.469615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.469645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.469975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.470005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 
00:29:30.148 [2024-11-20 09:14:55.470229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.470259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.470683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.470713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.470831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.470861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.471189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.471219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.471566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.471596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 
00:29:30.148 [2024-11-20 09:14:55.471790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.471820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.472187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.472219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.472546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.472577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.472908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.472937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.473148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.473187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 
00:29:30.148 [2024-11-20 09:14:55.473536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.473567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.473918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.473947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.474323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.474354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.474697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.474727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.474940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.474968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 
00:29:30.148 [2024-11-20 09:14:55.475172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.475202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.475423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.475452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.475779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.475807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.476156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.476216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.476411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.476440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 
00:29:30.148 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.148 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:30.148 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:30.148 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:30.148 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:30.148 [2024-11-20 09:14:55.479957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-11-20 09:14:55.480021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-11-20 09:14:55.480414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.480454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.480691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.480720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.481063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.481092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.481327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.481358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 
00:29:30.149 [2024-11-20 09:14:55.481698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.481727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.482060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.482090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.482313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.482344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.482541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.482571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.482895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.482925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 
00:29:30.149 [2024-11-20 09:14:55.483277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.483308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.483651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.483682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.484012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.484042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.484265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.484295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.484639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.484668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 
00:29:30.149 [2024-11-20 09:14:55.484995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.485025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.485220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.485250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.485601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.485632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.485977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.486008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.486265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.486296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 
00:29:30.149 [2024-11-20 09:14:55.486531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.486560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.486896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.486926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.487147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.487194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.487504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.487534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.487750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.487778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 
00:29:30.149 [2024-11-20 09:14:55.488148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.488193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.488445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.488475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.488826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.488856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.489228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.489277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.489629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.489659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 
00:29:30.149 [2024-11-20 09:14:55.490003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.490031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.490388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.490419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.490749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.490778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.491147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.491192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.491315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.491348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 
00:29:30.149 [2024-11-20 09:14:55.491680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.491711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.149 qpair failed and we were unable to recover it. 00:29:30.149 [2024-11-20 09:14:55.492057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.149 [2024-11-20 09:14:55.492086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.492395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.492426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.492629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.492658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.492997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.493026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 
00:29:30.150 [2024-11-20 09:14:55.493381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.493412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.493731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.493761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.494000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.494029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.494335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.494368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.494573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.494602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 
00:29:30.150 [2024-11-20 09:14:55.494826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.494855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.495192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.495222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.495454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.495486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.495824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.495854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.496190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.496221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 
00:29:30.150 [2024-11-20 09:14:55.496627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.496657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.497003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.497034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.497379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.497412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.497760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.497790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.498141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.498266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 
00:29:30.150 [2024-11-20 09:14:55.498629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.498665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.498998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.499028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.499244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.499276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.499639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.499669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.500005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.500034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 
00:29:30.150 [2024-11-20 09:14:55.500399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.500429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.500754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.500783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.501129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.501167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.501504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.501534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.501908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.501937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 
00:29:30.150 [2024-11-20 09:14:55.502255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.502286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.502490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.502518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.502818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.502847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.503198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.503229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.503595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.503625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 
00:29:30.150 [2024-11-20 09:14:55.503988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.504019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.504212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.504243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.504614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.504644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.504847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.504876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 00:29:30.150 [2024-11-20 09:14:55.505132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.150 [2024-11-20 09:14:55.505175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.150 qpair failed and we were unable to recover it. 
00:29:30.151 [2024-11-20 09:14:55.505517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.505547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.505904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.505934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.506308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.506338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.506685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.506715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.507083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.507112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 
00:29:30.151 [2024-11-20 09:14:55.507425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.507457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.507689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.507719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.507912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.507939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.508314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.508346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.508693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.508723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 
00:29:30.151 [2024-11-20 09:14:55.509066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.509095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.509304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.509333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.509636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.509666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.509885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.509913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 00:29:30.151 [2024-11-20 09:14:55.510147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.151 [2024-11-20 09:14:55.510185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.151 qpair failed and we were unable to recover it. 
00:29:30.151 [2024-11-20 09:14:55.510516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.510546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.510876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.510905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.511239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.511271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.511615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.511644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.511735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.511764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.512101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.512131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.512507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.512538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.512904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.512934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.513142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.513189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.513531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.513561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.513903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.513932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.514284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.514315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.514647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.514676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.515020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.515050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.515404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.515434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.515768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.515798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.516130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-11-20 09:14:55.516167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-11-20 09:14:55.516389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.516417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.516754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.516783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:30.152 [2024-11-20 09:14:55.517130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.517177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:30.152 [2024-11-20 09:14:55.517479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.517509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.152 [2024-11-20 09:14:55.517856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:30.152 [2024-11-20 09:14:55.517887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.518251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.518282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.518621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.518651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.518987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.519015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.519394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.519425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.519756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.519784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.520118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.520150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.520387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.520416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.520758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.520787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.521145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.521192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.521426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.521460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.521682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.521715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.522074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.522104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.522450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.522481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.522822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.522851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.523198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.523230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.523577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.523607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.523973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.524002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.524322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.524355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.524704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.524734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.525092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.525122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.525467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.525497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.525816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.525847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.526154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.526195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.526402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.526435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.526767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.526797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.527131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.527169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.527518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.527548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.527885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.527915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.528143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.528181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.528527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.528556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.528905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.528935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-11-20 09:14:55.529113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-11-20 09:14:55.529140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.529480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.529511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.529855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.529884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.530231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.530262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.530600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.530629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.530974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.531004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.531359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.531390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.531627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.531660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.532011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.532040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.532420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.532451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.532779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.532810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.533176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.533206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.533549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.533578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.533816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.533844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.534169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.534200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.534457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.534486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.534839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.534868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.535200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.535232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.535586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.535616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.535954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.535989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.536333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.536363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.536711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.536741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.537073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.537101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.537329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.537359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.537661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.537690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.538025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.538055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.538401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.538431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.538784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.538814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.539172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.539202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.539542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.539571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.539944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.539973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.540191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.540222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.540582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.540611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.540985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.541015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.541368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.541397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.541612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.541640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.541853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.541882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.542193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.542224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.542578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-11-20 09:14:55.542607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-11-20 09:14:55.542959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-11-20 09:14:55.542987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-11-20 09:14:55.543330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-11-20 09:14:55.543361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-11-20 09:14:55.543658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.543686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.544033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.544062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.544261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.544291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.544511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.544544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.544867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.544896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 
00:29:30.154 [2024-11-20 09:14:55.545248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.545286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.545648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.545679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.546009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.546037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.546390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.546420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.546755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.546784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 
00:29:30.154 [2024-11-20 09:14:55.547132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.547168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.547520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.547549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.547879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.547908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.548271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.548303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.548510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.548539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 
00:29:30.154 [2024-11-20 09:14:55.548869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.548899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.549246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.549277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.549633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.549662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.550023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.550052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.550278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.550308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 
00:29:30.154 [2024-11-20 09:14:55.550656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.550685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.551011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.551040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.551384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.551413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.551761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.551790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-11-20 09:14:55.552175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-11-20 09:14:55.552206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 
00:29:30.154 Malloc0 00:29:30.154 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:30.154 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:30.154 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.154 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:29:30.155 [2024-11-20 09:14:55.560364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:29:30.156 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.156 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:30.156 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:30.156 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:29:30.156 [2024-11-20 09:14:55.578470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-11-20 09:14:55.578499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-11-20 09:14:55.578704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-11-20 09:14:55.578732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-11-20 09:14:55.579095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.579124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.579369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.579398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.579731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.579759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 
00:29:30.157 [2024-11-20 09:14:55.579978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.580006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.580246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.580276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.580532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.580560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.580893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.580960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.581335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.581385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.157 qpair failed and we were unable to recover it. 
00:29:30.157 [2024-11-20 09:14:55.581725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.581775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:30.157 [2024-11-20 09:14:55.582156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.157 [2024-11-20 09:14:55.582222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.157 [2024-11-20 09:14:55.582605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.582660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.583027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.583065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 
00:29:30.157 [2024-11-20 09:14:55.583409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.583441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.583783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.583812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.584025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.584054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.584273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.584304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.584534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.584567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 
00:29:30.157 [2024-11-20 09:14:55.584807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.584836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.585060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.585089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.585411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.585442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.585774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.585803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.586202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.586233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 
00:29:30.157 [2024-11-20 09:14:55.586423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.586451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.586786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.586815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.587057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.587086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.587314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.587344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.587693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.587721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 
00:29:30.157 [2024-11-20 09:14:55.588070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.588107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.588448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.588478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.588834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.588863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.588954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.588982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.589218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.589249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 
00:29:30.157 [2024-11-20 09:14:55.589595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.589625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.589961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.589990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.590357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.590387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.590593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-11-20 09:14:55.590621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-11-20 09:14:55.590760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.590791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 
00:29:30.158 [2024-11-20 09:14:55.591148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.591187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.591419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.591448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.591802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.591831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.592175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.592204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.592534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.592563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 
00:29:30.158 [2024-11-20 09:14:55.592676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.592722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.593094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.593149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.158 [2024-11-20 09:14:55.593594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.593647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.158 [2024-11-20 09:14:55.593911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.593960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 
00:29:30.158 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.158 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.158 [2024-11-20 09:14:55.594408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.594466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.594743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.594786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.595137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.595194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.595492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.595520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.595854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.595884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 
00:29:30.158 [2024-11-20 09:14:55.596130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.596172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.596515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.596544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.596762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.596791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.597004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.597033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.597275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.597304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 
00:29:30.158 [2024-11-20 09:14:55.597683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.597712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.598043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.598074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.598450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.598483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.598814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.598843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.599193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.599224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 
00:29:30.158 [2024-11-20 09:14:55.599463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.599492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.599827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.599855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.600232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.600263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 00:29:30.158 [2024-11-20 09:14:55.600482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.158 [2024-11-20 09:14:55.600511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb660c0 with addr=10.0.0.2, port=4420 00:29:30.158 qpair failed and we were unable to recover it. 
00:29:30.158 [2024-11-20 09:14:55.600649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.158 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.158 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:30.158 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.158 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.158 [2024-11-20 09:14:55.611349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.158 [2024-11-20 09:14:55.611470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.158 [2024-11-20 09:14:55.611519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.158 [2024-11-20 09:14:55.611543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.158 [2024-11-20 09:14:55.611563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.158 [2024-11-20 09:14:55.611615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.158 qpair failed and we were unable to recover it. 
00:29:30.158 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.158 09:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 882699 00:29:30.158 [2024-11-20 09:14:55.621206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.158 [2024-11-20 09:14:55.621340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.158 [2024-11-20 09:14:55.621371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.158 [2024-11-20 09:14:55.621388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.158 [2024-11-20 09:14:55.621402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.159 [2024-11-20 09:14:55.621434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.159 qpair failed and we were unable to recover it. 
00:29:30.159 [2024-11-20 09:14:55.631276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.159 [2024-11-20 09:14:55.631375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.159 [2024-11-20 09:14:55.631399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.159 [2024-11-20 09:14:55.631411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.159 [2024-11-20 09:14:55.631421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.159 [2024-11-20 09:14:55.631444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.159 qpair failed and we were unable to recover it. 
00:29:30.159 [2024-11-20 09:14:55.641258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.159 [2024-11-20 09:14:55.641319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.159 [2024-11-20 09:14:55.641333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.159 [2024-11-20 09:14:55.641340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.159 [2024-11-20 09:14:55.641347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.159 [2024-11-20 09:14:55.641361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.159 qpair failed and we were unable to recover it. 
00:29:30.419 [2024-11-20 09:14:55.651275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.419 [2024-11-20 09:14:55.651347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.419 [2024-11-20 09:14:55.651361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.419 [2024-11-20 09:14:55.651369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.419 [2024-11-20 09:14:55.651375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.419 [2024-11-20 09:14:55.651390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.419 qpair failed and we were unable to recover it. 
00:29:30.419 [2024-11-20 09:14:55.661200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.419 [2024-11-20 09:14:55.661259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.419 [2024-11-20 09:14:55.661273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.419 [2024-11-20 09:14:55.661281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.419 [2024-11-20 09:14:55.661288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.419 [2024-11-20 09:14:55.661302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.419 qpair failed and we were unable to recover it. 
00:29:30.419 [2024-11-20 09:14:55.671169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.419 [2024-11-20 09:14:55.671233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.419 [2024-11-20 09:14:55.671249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.419 [2024-11-20 09:14:55.671256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.419 [2024-11-20 09:14:55.671263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.419 [2024-11-20 09:14:55.671278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.419 qpair failed and we were unable to recover it. 
00:29:30.419 [2024-11-20 09:14:55.681311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.419 [2024-11-20 09:14:55.681415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.419 [2024-11-20 09:14:55.681429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.419 [2024-11-20 09:14:55.681438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.419 [2024-11-20 09:14:55.681445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.419 [2024-11-20 09:14:55.681458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.419 qpair failed and we were unable to recover it. 
00:29:30.419 [2024-11-20 09:14:55.691388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.419 [2024-11-20 09:14:55.691452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.419 [2024-11-20 09:14:55.691466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.419 [2024-11-20 09:14:55.691474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.419 [2024-11-20 09:14:55.691480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.419 [2024-11-20 09:14:55.691494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.419 qpair failed and we were unable to recover it. 
00:29:30.419 [2024-11-20 09:14:55.701354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.419 [2024-11-20 09:14:55.701432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.419 [2024-11-20 09:14:55.701450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.419 [2024-11-20 09:14:55.701457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.419 [2024-11-20 09:14:55.701464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.419 [2024-11-20 09:14:55.701478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.419 qpair failed and we were unable to recover it. 
00:29:30.419 [2024-11-20 09:14:55.711444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.419 [2024-11-20 09:14:55.711505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.419 [2024-11-20 09:14:55.711519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.419 [2024-11-20 09:14:55.711526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.711532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.711547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.721461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.721538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.721551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.721559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.721565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.721579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.731473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.731525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.731538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.731546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.731552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.731566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.741447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.741493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.741506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.741513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.741520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.741537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.751423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.751478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.751491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.751499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.751505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.751519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.761545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.761597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.761610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.761618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.761624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.761637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.771590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.771643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.771657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.771664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.771671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.771684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.781552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.781601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.781614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.781622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.781628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.781642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.791611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.791669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.791682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.791690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.791696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.791710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.801664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.801722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.801736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.801744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.801750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.801763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.811654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.811706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.811720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.811727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.811734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.811748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.821682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.821729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.821742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.821749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.821755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.821769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.831952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.832011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.832032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.420 [2024-11-20 09:14:55.832039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.420 [2024-11-20 09:14:55.832045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.420 [2024-11-20 09:14:55.832059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.420 qpair failed and we were unable to recover it. 
00:29:30.420 [2024-11-20 09:14:55.841789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.420 [2024-11-20 09:14:55.841872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.420 [2024-11-20 09:14:55.841886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.421 [2024-11-20 09:14:55.841894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.421 [2024-11-20 09:14:55.841900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.421 [2024-11-20 09:14:55.841914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 09:14:55.851845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.421 [2024-11-20 09:14:55.851905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.421 [2024-11-20 09:14:55.851919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.421 [2024-11-20 09:14:55.851926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.421 [2024-11-20 09:14:55.851933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.421 [2024-11-20 09:14:55.851947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 09:14:55.861827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.421 [2024-11-20 09:14:55.861912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.421 [2024-11-20 09:14:55.861925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.421 [2024-11-20 09:14:55.861932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.421 [2024-11-20 09:14:55.861939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.421 [2024-11-20 09:14:55.861953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 09:14:55.871854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.421 [2024-11-20 09:14:55.871909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.421 [2024-11-20 09:14:55.871922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.421 [2024-11-20 09:14:55.871930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.421 [2024-11-20 09:14:55.871937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.421 [2024-11-20 09:14:55.871955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 09:14:55.881874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.421 [2024-11-20 09:14:55.881933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.421 [2024-11-20 09:14:55.881946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.421 [2024-11-20 09:14:55.881954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.421 [2024-11-20 09:14:55.881960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.421 [2024-11-20 09:14:55.881974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 09:14:55.891908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.421 [2024-11-20 09:14:55.891965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.421 [2024-11-20 09:14:55.891979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.421 [2024-11-20 09:14:55.891986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.421 [2024-11-20 09:14:55.891993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.421 [2024-11-20 09:14:55.892007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 09:14:55.901879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.421 [2024-11-20 09:14:55.901926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.421 [2024-11-20 09:14:55.901942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.421 [2024-11-20 09:14:55.901950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.421 [2024-11-20 09:14:55.901958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.421 [2024-11-20 09:14:55.901972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 09:14:55.911949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.421 [2024-11-20 09:14:55.912002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.421 [2024-11-20 09:14:55.912016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.421 [2024-11-20 09:14:55.912023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.421 [2024-11-20 09:14:55.912030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.421 [2024-11-20 09:14:55.912044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 09:14:55.921979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.421 [2024-11-20 09:14:55.922037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.421 [2024-11-20 09:14:55.922050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.421 [2024-11-20 09:14:55.922058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.421 [2024-11-20 09:14:55.922065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.421 [2024-11-20 09:14:55.922078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 09:14:55.932053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.421 [2024-11-20 09:14:55.932111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.421 [2024-11-20 09:14:55.932125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.421 [2024-11-20 09:14:55.932133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.421 [2024-11-20 09:14:55.932139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.421 [2024-11-20 09:14:55.932153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 09:14:55.942004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.421 [2024-11-20 09:14:55.942061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.421 [2024-11-20 09:14:55.942074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.421 [2024-11-20 09:14:55.942082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.421 [2024-11-20 09:14:55.942088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.421 [2024-11-20 09:14:55.942102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.682 [2024-11-20 09:14:55.952116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.682 [2024-11-20 09:14:55.952182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.682 [2024-11-20 09:14:55.952196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.682 [2024-11-20 09:14:55.952204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.682 [2024-11-20 09:14:55.952210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.682 [2024-11-20 09:14:55.952225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.682 qpair failed and we were unable to recover it. 
00:29:30.682 [2024-11-20 09:14:55.961968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.682 [2024-11-20 09:14:55.962025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.682 [2024-11-20 09:14:55.962040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.682 [2024-11-20 09:14:55.962053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.682 [2024-11-20 09:14:55.962061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.682 [2024-11-20 09:14:55.962076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.682 qpair failed and we were unable to recover it. 
00:29:30.682 [2024-11-20 09:14:55.972115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:55.972201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:55.972215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:55.972223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:55.972230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:55.972244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:55.982112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:55.982162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:55.982176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:55.982183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:55.982190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:55.982204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:55.992182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:55.992241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:55.992254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:55.992261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:55.992268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:55.992281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:56.002215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:56.002272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:56.002286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:56.002293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:56.002300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:56.002317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:56.012240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:56.012295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:56.012308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:56.012315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:56.012322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:56.012336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:56.022209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:56.022264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:56.022277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:56.022284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:56.022291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:56.022304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:56.032261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:56.032315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:56.032329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:56.032337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:56.032343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:56.032357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:56.042309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:56.042371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:56.042385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:56.042392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:56.042399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:56.042412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:56.052353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:56.052443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:56.052456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:56.052464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:56.052470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:56.052484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:56.062357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:56.062407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:56.062420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:56.062428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:56.062435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:56.062448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:56.072398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:56.072468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:56.072481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:56.072488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:56.072495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:56.072510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:56.082422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:56.082478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:56.082492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:56.082499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:56.082506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:56.082520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:56.092468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:56.092525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.683 [2024-11-20 09:14:56.092539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.683 [2024-11-20 09:14:56.092550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.683 [2024-11-20 09:14:56.092556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.683 [2024-11-20 09:14:56.092570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.683 qpair failed and we were unable to recover it. 
00:29:30.683 [2024-11-20 09:14:56.102424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.683 [2024-11-20 09:14:56.102476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.684 [2024-11-20 09:14:56.102490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.684 [2024-11-20 09:14:56.102498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.684 [2024-11-20 09:14:56.102504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.684 [2024-11-20 09:14:56.102518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.684 qpair failed and we were unable to recover it. 
00:29:30.684 [2024-11-20 09:14:56.112513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.684 [2024-11-20 09:14:56.112567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.684 [2024-11-20 09:14:56.112580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.684 [2024-11-20 09:14:56.112587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.684 [2024-11-20 09:14:56.112594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.684 [2024-11-20 09:14:56.112607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.684 qpair failed and we were unable to recover it. 
00:29:30.684 [2024-11-20 09:14:56.122554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.684 [2024-11-20 09:14:56.122614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.684 [2024-11-20 09:14:56.122628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.684 [2024-11-20 09:14:56.122635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.684 [2024-11-20 09:14:56.122642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.684 [2024-11-20 09:14:56.122656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.684 qpair failed and we were unable to recover it. 
00:29:30.684 [2024-11-20 09:14:56.132561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.684 [2024-11-20 09:14:56.132619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.684 [2024-11-20 09:14:56.132633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.684 [2024-11-20 09:14:56.132640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.684 [2024-11-20 09:14:56.132647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.684 [2024-11-20 09:14:56.132665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.684 qpair failed and we were unable to recover it. 
00:29:30.684 [2024-11-20 09:14:56.142565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.684 [2024-11-20 09:14:56.142621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.684 [2024-11-20 09:14:56.142634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.684 [2024-11-20 09:14:56.142642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.684 [2024-11-20 09:14:56.142649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.684 [2024-11-20 09:14:56.142662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.684 qpair failed and we were unable to recover it. 
00:29:30.684 [2024-11-20 09:14:56.152625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.684 [2024-11-20 09:14:56.152677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.684 [2024-11-20 09:14:56.152690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.684 [2024-11-20 09:14:56.152698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.684 [2024-11-20 09:14:56.152705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.684 [2024-11-20 09:14:56.152719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.684 qpair failed and we were unable to recover it. 
00:29:30.684 [2024-11-20 09:14:56.162669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.684 [2024-11-20 09:14:56.162722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.684 [2024-11-20 09:14:56.162735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.684 [2024-11-20 09:14:56.162743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.684 [2024-11-20 09:14:56.162749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.684 [2024-11-20 09:14:56.162763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.684 qpair failed and we were unable to recover it. 
00:29:30.684 [2024-11-20 09:14:56.172703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.684 [2024-11-20 09:14:56.172756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.684 [2024-11-20 09:14:56.172770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.684 [2024-11-20 09:14:56.172777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.684 [2024-11-20 09:14:56.172784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.684 [2024-11-20 09:14:56.172798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.684 qpair failed and we were unable to recover it. 
00:29:30.684 [2024-11-20 09:14:56.182703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.684 [2024-11-20 09:14:56.182758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.684 [2024-11-20 09:14:56.182771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.684 [2024-11-20 09:14:56.182779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.684 [2024-11-20 09:14:56.182786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.684 [2024-11-20 09:14:56.182799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.684 qpair failed and we were unable to recover it. 
00:29:30.684 [2024-11-20 09:14:56.192725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.684 [2024-11-20 09:14:56.192780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.684 [2024-11-20 09:14:56.192794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.684 [2024-11-20 09:14:56.192801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.684 [2024-11-20 09:14:56.192808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.684 [2024-11-20 09:14:56.192822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.684 qpair failed and we were unable to recover it. 
00:29:30.684 [2024-11-20 09:14:56.202788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.684 [2024-11-20 09:14:56.202846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.684 [2024-11-20 09:14:56.202859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.684 [2024-11-20 09:14:56.202867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.684 [2024-11-20 09:14:56.202873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.684 [2024-11-20 09:14:56.202887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.684 qpair failed and we were unable to recover it. 
00:29:30.946 [2024-11-20 09:14:56.212815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.946 [2024-11-20 09:14:56.212903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.946 [2024-11-20 09:14:56.212917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.946 [2024-11-20 09:14:56.212925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.946 [2024-11-20 09:14:56.212932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.946 [2024-11-20 09:14:56.212945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.946 qpair failed and we were unable to recover it. 
00:29:30.946 [2024-11-20 09:14:56.222786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.946 [2024-11-20 09:14:56.222841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.946 [2024-11-20 09:14:56.222867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.946 [2024-11-20 09:14:56.222880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.946 [2024-11-20 09:14:56.222887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.946 [2024-11-20 09:14:56.222907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.946 qpair failed and we were unable to recover it. 
00:29:30.946 [2024-11-20 09:14:56.232851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.946 [2024-11-20 09:14:56.232907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.946 [2024-11-20 09:14:56.232932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.946 [2024-11-20 09:14:56.232941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.946 [2024-11-20 09:14:56.232948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.232968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.242899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.242970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.242996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.243005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.243013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.243032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.252930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.253027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.253043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.253050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.253057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.253073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.262910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.262955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.262970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.262977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.262984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.263003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.272917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.272971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.272985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.272993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.272999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.273014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.282976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.283031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.283044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.283052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.283059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.283073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.293032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.293087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.293100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.293108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.293114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.293128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.302991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.303038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.303053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.303060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.303067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.303081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.313065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.313121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.313135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.313142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.313149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.313167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.323081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.323145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.323162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.323170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.323177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.323191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.333143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.333204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.333218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.333226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.333232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.333246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.343112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.343161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.343175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.343182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.343189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.343203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.353195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.353270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.353283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.353295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.353302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.353317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.947 [2024-11-20 09:14:56.363228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.947 [2024-11-20 09:14:56.363305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.947 [2024-11-20 09:14:56.363318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.947 [2024-11-20 09:14:56.363326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.947 [2024-11-20 09:14:56.363333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.947 [2024-11-20 09:14:56.363347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.947 qpair failed and we were unable to recover it. 
00:29:30.948 [2024-11-20 09:14:56.373259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.948 [2024-11-20 09:14:56.373318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.948 [2024-11-20 09:14:56.373332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.948 [2024-11-20 09:14:56.373340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.948 [2024-11-20 09:14:56.373346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.948 [2024-11-20 09:14:56.373360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.948 qpair failed and we were unable to recover it. 
00:29:30.948 [2024-11-20 09:14:56.383232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.948 [2024-11-20 09:14:56.383282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.948 [2024-11-20 09:14:56.383296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.948 [2024-11-20 09:14:56.383303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.948 [2024-11-20 09:14:56.383310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.948 [2024-11-20 09:14:56.383324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.948 qpair failed and we were unable to recover it. 
00:29:30.948 [2024-11-20 09:14:56.393254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.948 [2024-11-20 09:14:56.393307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.948 [2024-11-20 09:14:56.393321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.948 [2024-11-20 09:14:56.393328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.948 [2024-11-20 09:14:56.393335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.948 [2024-11-20 09:14:56.393352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.948 qpair failed and we were unable to recover it. 
00:29:30.948 [2024-11-20 09:14:56.403322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.948 [2024-11-20 09:14:56.403377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.948 [2024-11-20 09:14:56.403391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.948 [2024-11-20 09:14:56.403398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.948 [2024-11-20 09:14:56.403405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.948 [2024-11-20 09:14:56.403419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.948 qpair failed and we were unable to recover it. 
00:29:30.948 [2024-11-20 09:14:56.413304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.948 [2024-11-20 09:14:56.413364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.948 [2024-11-20 09:14:56.413377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.948 [2024-11-20 09:14:56.413385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.948 [2024-11-20 09:14:56.413392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.948 [2024-11-20 09:14:56.413406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.948 qpair failed and we were unable to recover it. 
00:29:30.948 [2024-11-20 09:14:56.423331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.948 [2024-11-20 09:14:56.423384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.948 [2024-11-20 09:14:56.423398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.948 [2024-11-20 09:14:56.423405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.948 [2024-11-20 09:14:56.423412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.948 [2024-11-20 09:14:56.423426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.948 qpair failed and we were unable to recover it. 
00:29:30.948 [2024-11-20 09:14:56.433408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.948 [2024-11-20 09:14:56.433465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.948 [2024-11-20 09:14:56.433478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.948 [2024-11-20 09:14:56.433485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.948 [2024-11-20 09:14:56.433492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.948 [2024-11-20 09:14:56.433506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.948 qpair failed and we were unable to recover it. 
00:29:30.948 [2024-11-20 09:14:56.443402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.948 [2024-11-20 09:14:56.443461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.948 [2024-11-20 09:14:56.443475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.948 [2024-11-20 09:14:56.443482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.948 [2024-11-20 09:14:56.443488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.948 [2024-11-20 09:14:56.443502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.948 qpair failed and we were unable to recover it. 
00:29:30.948 [2024-11-20 09:14:56.453448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.948 [2024-11-20 09:14:56.453501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.948 [2024-11-20 09:14:56.453514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.948 [2024-11-20 09:14:56.453522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.948 [2024-11-20 09:14:56.453528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.948 [2024-11-20 09:14:56.453542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.948 qpair failed and we were unable to recover it. 
00:29:30.948 [2024-11-20 09:14:56.463470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.948 [2024-11-20 09:14:56.463535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.948 [2024-11-20 09:14:56.463549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.948 [2024-11-20 09:14:56.463556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.948 [2024-11-20 09:14:56.463562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:30.948 [2024-11-20 09:14:56.463576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.948 qpair failed and we were unable to recover it. 
00:29:31.210 [2024-11-20 09:14:56.473506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.210 [2024-11-20 09:14:56.473561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.210 [2024-11-20 09:14:56.473575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.210 [2024-11-20 09:14:56.473583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.210 [2024-11-20 09:14:56.473589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.210 [2024-11-20 09:14:56.473603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.210 qpair failed and we were unable to recover it. 
00:29:31.210 [2024-11-20 09:14:56.483535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.210 [2024-11-20 09:14:56.483592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.210 [2024-11-20 09:14:56.483605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.210 [2024-11-20 09:14:56.483616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.210 [2024-11-20 09:14:56.483623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.210 [2024-11-20 09:14:56.483637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.210 qpair failed and we were unable to recover it. 
00:29:31.210 [2024-11-20 09:14:56.493572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.210 [2024-11-20 09:14:56.493627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.210 [2024-11-20 09:14:56.493640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.210 [2024-11-20 09:14:56.493648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.210 [2024-11-20 09:14:56.493654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.210 [2024-11-20 09:14:56.493668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.210 qpair failed and we were unable to recover it. 
00:29:31.210 [2024-11-20 09:14:56.503563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.210 [2024-11-20 09:14:56.503611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.210 [2024-11-20 09:14:56.503625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.210 [2024-11-20 09:14:56.503633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.210 [2024-11-20 09:14:56.503640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.210 [2024-11-20 09:14:56.503654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.210 qpair failed and we were unable to recover it. 
00:29:31.210 [2024-11-20 09:14:56.513599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.210 [2024-11-20 09:14:56.513651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.210 [2024-11-20 09:14:56.513665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.210 [2024-11-20 09:14:56.513672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.210 [2024-11-20 09:14:56.513678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.210 [2024-11-20 09:14:56.513692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.210 qpair failed and we were unable to recover it. 
00:29:31.210 [2024-11-20 09:14:56.523521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.211 [2024-11-20 09:14:56.523579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.211 [2024-11-20 09:14:56.523594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.211 [2024-11-20 09:14:56.523602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.211 [2024-11-20 09:14:56.523608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.211 [2024-11-20 09:14:56.523627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.211 qpair failed and we were unable to recover it. 
00:29:31.211 [2024-11-20 09:14:56.533672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.211 [2024-11-20 09:14:56.533725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.211 [2024-11-20 09:14:56.533739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.211 [2024-11-20 09:14:56.533746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.211 [2024-11-20 09:14:56.533753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.211 [2024-11-20 09:14:56.533767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.211 qpair failed and we were unable to recover it. 
00:29:31.211 [2024-11-20 09:14:56.543676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.211 [2024-11-20 09:14:56.543771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.211 [2024-11-20 09:14:56.543785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.211 [2024-11-20 09:14:56.543792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.211 [2024-11-20 09:14:56.543800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.211 [2024-11-20 09:14:56.543814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.211 qpair failed and we were unable to recover it. 
00:29:31.211 [2024-11-20 09:14:56.553688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.211 [2024-11-20 09:14:56.553741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.211 [2024-11-20 09:14:56.553754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.211 [2024-11-20 09:14:56.553761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.211 [2024-11-20 09:14:56.553768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.211 [2024-11-20 09:14:56.553782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.211 qpair failed and we were unable to recover it. 
00:29:31.211 [2024-11-20 09:14:56.563755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.211 [2024-11-20 09:14:56.563858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.211 [2024-11-20 09:14:56.563872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.211 [2024-11-20 09:14:56.563880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.211 [2024-11-20 09:14:56.563887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.211 [2024-11-20 09:14:56.563900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.211 qpair failed and we were unable to recover it. 
00:29:31.211 [2024-11-20 09:14:56.573795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.211 [2024-11-20 09:14:56.573896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.211 [2024-11-20 09:14:56.573910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.211 [2024-11-20 09:14:56.573917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.211 [2024-11-20 09:14:56.573924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.211 [2024-11-20 09:14:56.573938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.211 qpair failed and we were unable to recover it. 
00:29:31.211 [2024-11-20 09:14:56.583769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.211 [2024-11-20 09:14:56.583819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.211 [2024-11-20 09:14:56.583832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.211 [2024-11-20 09:14:56.583839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.211 [2024-11-20 09:14:56.583846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.211 [2024-11-20 09:14:56.583859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.211 qpair failed and we were unable to recover it.
00:29:31.211 [2024-11-20 09:14:56.593846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.211 [2024-11-20 09:14:56.593894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.211 [2024-11-20 09:14:56.593907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.211 [2024-11-20 09:14:56.593914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.211 [2024-11-20 09:14:56.593921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.211 [2024-11-20 09:14:56.593935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.211 qpair failed and we were unable to recover it.
00:29:31.211 [2024-11-20 09:14:56.603867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.211 [2024-11-20 09:14:56.603951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.211 [2024-11-20 09:14:56.603965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.211 [2024-11-20 09:14:56.603972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.211 [2024-11-20 09:14:56.603980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.211 [2024-11-20 09:14:56.603993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.211 qpair failed and we were unable to recover it.
00:29:31.211 [2024-11-20 09:14:56.613922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.211 [2024-11-20 09:14:56.613974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.211 [2024-11-20 09:14:56.613987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.211 [2024-11-20 09:14:56.613998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.211 [2024-11-20 09:14:56.614005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.211 [2024-11-20 09:14:56.614019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.211 qpair failed and we were unable to recover it.
00:29:31.211 [2024-11-20 09:14:56.623891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.211 [2024-11-20 09:14:56.623942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.211 [2024-11-20 09:14:56.623957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.211 [2024-11-20 09:14:56.623965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.211 [2024-11-20 09:14:56.623972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.211 [2024-11-20 09:14:56.623986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.211 qpair failed and we were unable to recover it.
00:29:31.211 [2024-11-20 09:14:56.633951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.211 [2024-11-20 09:14:56.634002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.211 [2024-11-20 09:14:56.634016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.211 [2024-11-20 09:14:56.634023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.211 [2024-11-20 09:14:56.634031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.211 [2024-11-20 09:14:56.634045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.211 qpair failed and we were unable to recover it.
00:29:31.211 [2024-11-20 09:14:56.643951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.212 [2024-11-20 09:14:56.644010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.212 [2024-11-20 09:14:56.644024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.212 [2024-11-20 09:14:56.644031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.212 [2024-11-20 09:14:56.644037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.212 [2024-11-20 09:14:56.644051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.212 qpair failed and we were unable to recover it.
00:29:31.212 [2024-11-20 09:14:56.654025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.212 [2024-11-20 09:14:56.654080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.212 [2024-11-20 09:14:56.654094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.212 [2024-11-20 09:14:56.654101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.212 [2024-11-20 09:14:56.654108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.212 [2024-11-20 09:14:56.654129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.212 qpair failed and we were unable to recover it.
00:29:31.212 [2024-11-20 09:14:56.663882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.212 [2024-11-20 09:14:56.663950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.212 [2024-11-20 09:14:56.663964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.212 [2024-11-20 09:14:56.663971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.212 [2024-11-20 09:14:56.663978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.212 [2024-11-20 09:14:56.663991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.212 qpair failed and we were unable to recover it.
00:29:31.212 [2024-11-20 09:14:56.674109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.212 [2024-11-20 09:14:56.674183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.212 [2024-11-20 09:14:56.674198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.212 [2024-11-20 09:14:56.674205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.212 [2024-11-20 09:14:56.674211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.212 [2024-11-20 09:14:56.674226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.212 qpair failed and we were unable to recover it.
00:29:31.212 [2024-11-20 09:14:56.684107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.212 [2024-11-20 09:14:56.684167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.212 [2024-11-20 09:14:56.684182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.212 [2024-11-20 09:14:56.684189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.212 [2024-11-20 09:14:56.684195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.212 [2024-11-20 09:14:56.684210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.212 qpair failed and we were unable to recover it.
00:29:31.212 [2024-11-20 09:14:56.694139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.212 [2024-11-20 09:14:56.694195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.212 [2024-11-20 09:14:56.694208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.212 [2024-11-20 09:14:56.694215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.212 [2024-11-20 09:14:56.694222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.212 [2024-11-20 09:14:56.694236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.212 qpair failed and we were unable to recover it.
00:29:31.212 [2024-11-20 09:14:56.704081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.212 [2024-11-20 09:14:56.704130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.212 [2024-11-20 09:14:56.704144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.212 [2024-11-20 09:14:56.704151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.212 [2024-11-20 09:14:56.704162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.212 [2024-11-20 09:14:56.704177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.212 qpair failed and we were unable to recover it.
00:29:31.212 [2024-11-20 09:14:56.714197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.212 [2024-11-20 09:14:56.714245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.212 [2024-11-20 09:14:56.714259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.212 [2024-11-20 09:14:56.714266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.212 [2024-11-20 09:14:56.714272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.212 [2024-11-20 09:14:56.714287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.212 qpair failed and we were unable to recover it.
00:29:31.212 [2024-11-20 09:14:56.724213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.212 [2024-11-20 09:14:56.724269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.212 [2024-11-20 09:14:56.724283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.212 [2024-11-20 09:14:56.724291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.212 [2024-11-20 09:14:56.724297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.212 [2024-11-20 09:14:56.724311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.212 qpair failed and we were unable to recover it.
00:29:31.212 [2024-11-20 09:14:56.734248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.212 [2024-11-20 09:14:56.734299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.212 [2024-11-20 09:14:56.734313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.212 [2024-11-20 09:14:56.734320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.212 [2024-11-20 09:14:56.734327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.212 [2024-11-20 09:14:56.734341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.212 qpair failed and we were unable to recover it.
00:29:31.474 [2024-11-20 09:14:56.744245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.474 [2024-11-20 09:14:56.744296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.474 [2024-11-20 09:14:56.744311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.474 [2024-11-20 09:14:56.744322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.474 [2024-11-20 09:14:56.744330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.474 [2024-11-20 09:14:56.744349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.474 qpair failed and we were unable to recover it.
00:29:31.474 [2024-11-20 09:14:56.754276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.474 [2024-11-20 09:14:56.754333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.474 [2024-11-20 09:14:56.754347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.474 [2024-11-20 09:14:56.754354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.474 [2024-11-20 09:14:56.754361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.474 [2024-11-20 09:14:56.754375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.474 qpair failed and we were unable to recover it.
00:29:31.474 [2024-11-20 09:14:56.764341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.474 [2024-11-20 09:14:56.764426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.474 [2024-11-20 09:14:56.764439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.474 [2024-11-20 09:14:56.764446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.474 [2024-11-20 09:14:56.764453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.474 [2024-11-20 09:14:56.764467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.474 qpair failed and we were unable to recover it.
00:29:31.474 [2024-11-20 09:14:56.774387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.474 [2024-11-20 09:14:56.774476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.474 [2024-11-20 09:14:56.774489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.474 [2024-11-20 09:14:56.774497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.474 [2024-11-20 09:14:56.774504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.474 [2024-11-20 09:14:56.774518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.474 qpair failed and we were unable to recover it.
00:29:31.474 [2024-11-20 09:14:56.784335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.474 [2024-11-20 09:14:56.784385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.474 [2024-11-20 09:14:56.784398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.474 [2024-11-20 09:14:56.784405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.474 [2024-11-20 09:14:56.784412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.474 [2024-11-20 09:14:56.784429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.474 qpair failed and we were unable to recover it.
00:29:31.474 [2024-11-20 09:14:56.794313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.474 [2024-11-20 09:14:56.794365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.474 [2024-11-20 09:14:56.794378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.474 [2024-11-20 09:14:56.794385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.475 [2024-11-20 09:14:56.794392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.475 [2024-11-20 09:14:56.794405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.475 qpair failed and we were unable to recover it.
00:29:31.475 [2024-11-20 09:14:56.804456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.475 [2024-11-20 09:14:56.804511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.475 [2024-11-20 09:14:56.804525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.475 [2024-11-20 09:14:56.804532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.475 [2024-11-20 09:14:56.804539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.475 [2024-11-20 09:14:56.804552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.475 qpair failed and we were unable to recover it.
00:29:31.475 [2024-11-20 09:14:56.814468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.475 [2024-11-20 09:14:56.814525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.475 [2024-11-20 09:14:56.814537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.475 [2024-11-20 09:14:56.814545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.475 [2024-11-20 09:14:56.814552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.475 [2024-11-20 09:14:56.814565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.475 qpair failed and we were unable to recover it.
00:29:31.475 [2024-11-20 09:14:56.824453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.475 [2024-11-20 09:14:56.824506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.475 [2024-11-20 09:14:56.824519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.475 [2024-11-20 09:14:56.824526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.475 [2024-11-20 09:14:56.824533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.475 [2024-11-20 09:14:56.824547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.475 qpair failed and we were unable to recover it.
00:29:31.475 [2024-11-20 09:14:56.834519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.475 [2024-11-20 09:14:56.834577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.475 [2024-11-20 09:14:56.834591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.475 [2024-11-20 09:14:56.834599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.475 [2024-11-20 09:14:56.834605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.475 [2024-11-20 09:14:56.834619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.475 qpair failed and we were unable to recover it.
00:29:31.475 [2024-11-20 09:14:56.844548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.475 [2024-11-20 09:14:56.844602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.475 [2024-11-20 09:14:56.844615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.475 [2024-11-20 09:14:56.844622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.475 [2024-11-20 09:14:56.844628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.475 [2024-11-20 09:14:56.844642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.475 qpair failed and we were unable to recover it.
00:29:31.475 [2024-11-20 09:14:56.854456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.475 [2024-11-20 09:14:56.854523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.475 [2024-11-20 09:14:56.854536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.475 [2024-11-20 09:14:56.854544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.475 [2024-11-20 09:14:56.854550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.475 [2024-11-20 09:14:56.854564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.475 qpair failed and we were unable to recover it.
00:29:31.475 [2024-11-20 09:14:56.864555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.475 [2024-11-20 09:14:56.864605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.475 [2024-11-20 09:14:56.864618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.475 [2024-11-20 09:14:56.864625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.475 [2024-11-20 09:14:56.864632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.475 [2024-11-20 09:14:56.864645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.475 qpair failed and we were unable to recover it.
00:29:31.475 [2024-11-20 09:14:56.874602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.475 [2024-11-20 09:14:56.874654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.475 [2024-11-20 09:14:56.874668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.475 [2024-11-20 09:14:56.874678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.475 [2024-11-20 09:14:56.874685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.475 [2024-11-20 09:14:56.874699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.475 qpair failed and we were unable to recover it.
00:29:31.475 [2024-11-20 09:14:56.884648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.475 [2024-11-20 09:14:56.884754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.475 [2024-11-20 09:14:56.884770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.475 [2024-11-20 09:14:56.884778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.475 [2024-11-20 09:14:56.884785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:31.475 [2024-11-20 09:14:56.884802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:31.475 qpair failed and we were unable to recover it.
00:29:31.475 [2024-11-20 09:14:56.894699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.475 [2024-11-20 09:14:56.894758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.475 [2024-11-20 09:14:56.894773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.475 [2024-11-20 09:14:56.894780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.475 [2024-11-20 09:14:56.894787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.475 [2024-11-20 09:14:56.894801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.475 qpair failed and we were unable to recover it. 
00:29:31.475 [2024-11-20 09:14:56.904642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.475 [2024-11-20 09:14:56.904694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.475 [2024-11-20 09:14:56.904708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.475 [2024-11-20 09:14:56.904715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.475 [2024-11-20 09:14:56.904722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.476 [2024-11-20 09:14:56.904735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.476 qpair failed and we were unable to recover it. 
00:29:31.476 [2024-11-20 09:14:56.914720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.476 [2024-11-20 09:14:56.914768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.476 [2024-11-20 09:14:56.914781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.476 [2024-11-20 09:14:56.914788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.476 [2024-11-20 09:14:56.914795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.476 [2024-11-20 09:14:56.914812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.476 qpair failed and we were unable to recover it. 
00:29:31.476 [2024-11-20 09:14:56.924749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.476 [2024-11-20 09:14:56.924805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.476 [2024-11-20 09:14:56.924819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.476 [2024-11-20 09:14:56.924826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.476 [2024-11-20 09:14:56.924833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.476 [2024-11-20 09:14:56.924847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.476 qpair failed and we were unable to recover it. 
00:29:31.476 [2024-11-20 09:14:56.934784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.476 [2024-11-20 09:14:56.934840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.476 [2024-11-20 09:14:56.934853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.476 [2024-11-20 09:14:56.934860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.476 [2024-11-20 09:14:56.934866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.476 [2024-11-20 09:14:56.934880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.476 qpair failed and we were unable to recover it. 
00:29:31.476 [2024-11-20 09:14:56.944763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.476 [2024-11-20 09:14:56.944810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.476 [2024-11-20 09:14:56.944824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.476 [2024-11-20 09:14:56.944831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.476 [2024-11-20 09:14:56.944837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.476 [2024-11-20 09:14:56.944851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.476 qpair failed and we were unable to recover it. 
00:29:31.476 [2024-11-20 09:14:56.954813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.476 [2024-11-20 09:14:56.954864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.476 [2024-11-20 09:14:56.954877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.476 [2024-11-20 09:14:56.954885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.476 [2024-11-20 09:14:56.954891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.476 [2024-11-20 09:14:56.954905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.476 qpair failed and we were unable to recover it. 
00:29:31.476 [2024-11-20 09:14:56.964841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.476 [2024-11-20 09:14:56.964897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.476 [2024-11-20 09:14:56.964911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.476 [2024-11-20 09:14:56.964918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.476 [2024-11-20 09:14:56.964924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.476 [2024-11-20 09:14:56.964938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.476 qpair failed and we were unable to recover it. 
00:29:31.476 [2024-11-20 09:14:56.974909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.476 [2024-11-20 09:14:56.975003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.476 [2024-11-20 09:14:56.975018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.476 [2024-11-20 09:14:56.975026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.476 [2024-11-20 09:14:56.975032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.476 [2024-11-20 09:14:56.975046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.476 qpair failed and we were unable to recover it. 
00:29:31.476 [2024-11-20 09:14:56.984853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.476 [2024-11-20 09:14:56.984899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.476 [2024-11-20 09:14:56.984912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.476 [2024-11-20 09:14:56.984920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.476 [2024-11-20 09:14:56.984927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.476 [2024-11-20 09:14:56.984940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.476 qpair failed and we were unable to recover it. 
00:29:31.476 [2024-11-20 09:14:56.994917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.476 [2024-11-20 09:14:56.994986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.476 [2024-11-20 09:14:56.995000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.476 [2024-11-20 09:14:56.995007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.476 [2024-11-20 09:14:56.995014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.476 [2024-11-20 09:14:56.995028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.476 qpair failed and we were unable to recover it. 
00:29:31.740 [2024-11-20 09:14:57.004975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.740 [2024-11-20 09:14:57.005031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.740 [2024-11-20 09:14:57.005045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.740 [2024-11-20 09:14:57.005057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.740 [2024-11-20 09:14:57.005064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.740 [2024-11-20 09:14:57.005078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.740 qpair failed and we were unable to recover it. 
00:29:31.740 [2024-11-20 09:14:57.015001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.740 [2024-11-20 09:14:57.015054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.740 [2024-11-20 09:14:57.015067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.740 [2024-11-20 09:14:57.015075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.740 [2024-11-20 09:14:57.015081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.740 [2024-11-20 09:14:57.015095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.740 qpair failed and we were unable to recover it. 
00:29:31.740 [2024-11-20 09:14:57.025003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.740 [2024-11-20 09:14:57.025055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.740 [2024-11-20 09:14:57.025069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.740 [2024-11-20 09:14:57.025076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.740 [2024-11-20 09:14:57.025082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.740 [2024-11-20 09:14:57.025096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.740 qpair failed and we were unable to recover it. 
00:29:31.740 [2024-11-20 09:14:57.035093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.740 [2024-11-20 09:14:57.035169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.740 [2024-11-20 09:14:57.035182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.740 [2024-11-20 09:14:57.035190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.740 [2024-11-20 09:14:57.035196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.740 [2024-11-20 09:14:57.035211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.740 qpair failed and we were unable to recover it. 
00:29:31.740 [2024-11-20 09:14:57.045056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.740 [2024-11-20 09:14:57.045107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.740 [2024-11-20 09:14:57.045121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.740 [2024-11-20 09:14:57.045129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.740 [2024-11-20 09:14:57.045135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.740 [2024-11-20 09:14:57.045153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.740 qpair failed and we were unable to recover it. 
00:29:31.740 [2024-11-20 09:14:57.055089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.740 [2024-11-20 09:14:57.055145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.740 [2024-11-20 09:14:57.055162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.740 [2024-11-20 09:14:57.055170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.740 [2024-11-20 09:14:57.055176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.740 [2024-11-20 09:14:57.055191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.740 qpair failed and we were unable to recover it. 
00:29:31.740 [2024-11-20 09:14:57.065143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.740 [2024-11-20 09:14:57.065228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.740 [2024-11-20 09:14:57.065241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.740 [2024-11-20 09:14:57.065249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.740 [2024-11-20 09:14:57.065256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.740 [2024-11-20 09:14:57.065270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.740 qpair failed and we were unable to recover it. 
00:29:31.740 [2024-11-20 09:14:57.075178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.740 [2024-11-20 09:14:57.075233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.740 [2024-11-20 09:14:57.075247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.740 [2024-11-20 09:14:57.075254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.740 [2024-11-20 09:14:57.075260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.740 [2024-11-20 09:14:57.075274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.740 qpair failed and we were unable to recover it. 
00:29:31.740 [2024-11-20 09:14:57.085195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.740 [2024-11-20 09:14:57.085261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.740 [2024-11-20 09:14:57.085275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.740 [2024-11-20 09:14:57.085282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.740 [2024-11-20 09:14:57.085288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.740 [2024-11-20 09:14:57.085302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.740 qpair failed and we were unable to recover it. 
00:29:31.740 [2024-11-20 09:14:57.095220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.740 [2024-11-20 09:14:57.095282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.740 [2024-11-20 09:14:57.095295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.740 [2024-11-20 09:14:57.095302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.740 [2024-11-20 09:14:57.095309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.740 [2024-11-20 09:14:57.095322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.740 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.105135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.105185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.105200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.741 [2024-11-20 09:14:57.105207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.741 [2024-11-20 09:14:57.105213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.741 [2024-11-20 09:14:57.105227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.741 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.115248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.115305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.115318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.741 [2024-11-20 09:14:57.115325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.741 [2024-11-20 09:14:57.115332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.741 [2024-11-20 09:14:57.115346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.741 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.125339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.125415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.125429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.741 [2024-11-20 09:14:57.125438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.741 [2024-11-20 09:14:57.125444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.741 [2024-11-20 09:14:57.125458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.741 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.135349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.135403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.135417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.741 [2024-11-20 09:14:57.135427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.741 [2024-11-20 09:14:57.135434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.741 [2024-11-20 09:14:57.135447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.741 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.145314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.145370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.145383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.741 [2024-11-20 09:14:57.145390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.741 [2024-11-20 09:14:57.145397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.741 [2024-11-20 09:14:57.145410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.741 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.155403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.155455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.155468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.741 [2024-11-20 09:14:57.155475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.741 [2024-11-20 09:14:57.155481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.741 [2024-11-20 09:14:57.155495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.741 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.165404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.165478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.165491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.741 [2024-11-20 09:14:57.165499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.741 [2024-11-20 09:14:57.165505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.741 [2024-11-20 09:14:57.165519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.741 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.175486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.175583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.175598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.741 [2024-11-20 09:14:57.175605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.741 [2024-11-20 09:14:57.175612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.741 [2024-11-20 09:14:57.175632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.741 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.185443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.185506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.185520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.741 [2024-11-20 09:14:57.185527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.741 [2024-11-20 09:14:57.185533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.741 [2024-11-20 09:14:57.185547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.741 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.195501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.195556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.195569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.741 [2024-11-20 09:14:57.195576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.741 [2024-11-20 09:14:57.195583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.741 [2024-11-20 09:14:57.195596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.741 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.205536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.205589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.205602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.741 [2024-11-20 09:14:57.205610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.741 [2024-11-20 09:14:57.205616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.741 [2024-11-20 09:14:57.205630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.741 qpair failed and we were unable to recover it. 
00:29:31.741 [2024-11-20 09:14:57.215557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.741 [2024-11-20 09:14:57.215609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.741 [2024-11-20 09:14:57.215622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.742 [2024-11-20 09:14:57.215630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.742 [2024-11-20 09:14:57.215636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.742 [2024-11-20 09:14:57.215650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.742 qpair failed and we were unable to recover it. 
00:29:31.742 [2024-11-20 09:14:57.225539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.742 [2024-11-20 09:14:57.225589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.742 [2024-11-20 09:14:57.225603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.742 [2024-11-20 09:14:57.225610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.742 [2024-11-20 09:14:57.225617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.742 [2024-11-20 09:14:57.225630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.742 qpair failed and we were unable to recover it. 
00:29:31.742 [2024-11-20 09:14:57.235617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.742 [2024-11-20 09:14:57.235665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.742 [2024-11-20 09:14:57.235678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.742 [2024-11-20 09:14:57.235686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.742 [2024-11-20 09:14:57.235693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.742 [2024-11-20 09:14:57.235706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.742 qpair failed and we were unable to recover it. 
00:29:31.742 [2024-11-20 09:14:57.245649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.742 [2024-11-20 09:14:57.245704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.742 [2024-11-20 09:14:57.245718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.742 [2024-11-20 09:14:57.245725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.742 [2024-11-20 09:14:57.245731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.742 [2024-11-20 09:14:57.245745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.742 qpair failed and we were unable to recover it. 
00:29:31.742 [2024-11-20 09:14:57.255692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.742 [2024-11-20 09:14:57.255759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.742 [2024-11-20 09:14:57.255772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.742 [2024-11-20 09:14:57.255779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.742 [2024-11-20 09:14:57.255786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:31.742 [2024-11-20 09:14:57.255799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.742 qpair failed and we were unable to recover it. 
00:29:32.003 [2024-11-20 09:14:57.265535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.003 [2024-11-20 09:14:57.265637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.003 [2024-11-20 09:14:57.265651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.003 [2024-11-20 09:14:57.265662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.003 [2024-11-20 09:14:57.265669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.003 [2024-11-20 09:14:57.265682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.003 qpair failed and we were unable to recover it. 
00:29:32.003 [2024-11-20 09:14:57.275722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.003 [2024-11-20 09:14:57.275779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.003 [2024-11-20 09:14:57.275793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.003 [2024-11-20 09:14:57.275799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.275806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.275819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.285750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.285805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.285818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.285825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.285832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.285845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.295805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.295859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.295872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.295879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.295885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.295899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.305645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.305691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.305705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.305712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.305719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.305736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.315849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.315906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.315920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.315927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.315933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.315946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.325924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.325984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.326009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.326017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.326024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.326043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.335856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.335911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.335926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.335933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.335940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.335955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.345874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.345929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.345943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.345950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.345956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.345970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.355960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.356022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.356048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.356056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.356063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.356082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.365988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.366045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.366060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.366067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.366074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.366089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.376019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.376074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.376088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.376095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.376101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.376115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.385999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.386045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.386059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.386066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.386072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.386086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.396075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.396129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.396142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.396154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.396173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.396188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.406078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.406135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.004 [2024-11-20 09:14:57.406149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.004 [2024-11-20 09:14:57.406156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.004 [2024-11-20 09:14:57.406168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.004 [2024-11-20 09:14:57.406182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.004 qpair failed and we were unable to recover it. 
00:29:32.004 [2024-11-20 09:14:57.416121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.004 [2024-11-20 09:14:57.416176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.005 [2024-11-20 09:14:57.416190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.005 [2024-11-20 09:14:57.416197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.005 [2024-11-20 09:14:57.416204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.005 [2024-11-20 09:14:57.416217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.005 qpair failed and we were unable to recover it. 
00:29:32.005 [2024-11-20 09:14:57.426105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.005 [2024-11-20 09:14:57.426184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.005 [2024-11-20 09:14:57.426198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.005 [2024-11-20 09:14:57.426205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.005 [2024-11-20 09:14:57.426211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.005 [2024-11-20 09:14:57.426225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.005 qpair failed and we were unable to recover it. 
00:29:32.005 [2024-11-20 09:14:57.436141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.005 [2024-11-20 09:14:57.436195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.005 [2024-11-20 09:14:57.436208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.005 [2024-11-20 09:14:57.436215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.005 [2024-11-20 09:14:57.436222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.005 [2024-11-20 09:14:57.436239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.005 qpair failed and we were unable to recover it. 
00:29:32.005 [2024-11-20 09:14:57.446211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.005 [2024-11-20 09:14:57.446274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.005 [2024-11-20 09:14:57.446288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.005 [2024-11-20 09:14:57.446295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.005 [2024-11-20 09:14:57.446301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.005 [2024-11-20 09:14:57.446315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.005 qpair failed and we were unable to recover it. 
00:29:32.005 [2024-11-20 09:14:57.456249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.005 [2024-11-20 09:14:57.456307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.005 [2024-11-20 09:14:57.456321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.005 [2024-11-20 09:14:57.456328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.005 [2024-11-20 09:14:57.456334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.005 [2024-11-20 09:14:57.456348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.005 qpair failed and we were unable to recover it. 
00:29:32.005 [2024-11-20 09:14:57.466222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.005 [2024-11-20 09:14:57.466269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.005 [2024-11-20 09:14:57.466282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.005 [2024-11-20 09:14:57.466290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.005 [2024-11-20 09:14:57.466296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.005 [2024-11-20 09:14:57.466310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.005 qpair failed and we were unable to recover it. 
00:29:32.005 [2024-11-20 09:14:57.476156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.005 [2024-11-20 09:14:57.476217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.005 [2024-11-20 09:14:57.476230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.005 [2024-11-20 09:14:57.476237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.005 [2024-11-20 09:14:57.476244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.005 [2024-11-20 09:14:57.476257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.005 qpair failed and we were unable to recover it. 
00:29:32.005 [2024-11-20 09:14:57.486369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.005 [2024-11-20 09:14:57.486429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.005 [2024-11-20 09:14:57.486443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.005 [2024-11-20 09:14:57.486450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.005 [2024-11-20 09:14:57.486456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.005 [2024-11-20 09:14:57.486469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.005 qpair failed and we were unable to recover it. 
00:29:32.005 [2024-11-20 09:14:57.496366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.005 [2024-11-20 09:14:57.496446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.005 [2024-11-20 09:14:57.496460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.005 [2024-11-20 09:14:57.496467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.005 [2024-11-20 09:14:57.496473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.005 [2024-11-20 09:14:57.496486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.005 qpair failed and we were unable to recover it.
00:29:32.005 [2024-11-20 09:14:57.506354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.005 [2024-11-20 09:14:57.506399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.005 [2024-11-20 09:14:57.506413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.005 [2024-11-20 09:14:57.506420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.005 [2024-11-20 09:14:57.506426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.005 [2024-11-20 09:14:57.506439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.005 qpair failed and we were unable to recover it.
00:29:32.005 [2024-11-20 09:14:57.516397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.005 [2024-11-20 09:14:57.516450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.005 [2024-11-20 09:14:57.516463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.005 [2024-11-20 09:14:57.516470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.005 [2024-11-20 09:14:57.516476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.005 [2024-11-20 09:14:57.516489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.005 qpair failed and we were unable to recover it.
00:29:32.005 [2024-11-20 09:14:57.526452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.005 [2024-11-20 09:14:57.526509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.005 [2024-11-20 09:14:57.526522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.005 [2024-11-20 09:14:57.526533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.005 [2024-11-20 09:14:57.526539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.005 [2024-11-20 09:14:57.526553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.005 qpair failed and we were unable to recover it.
00:29:32.267 [2024-11-20 09:14:57.536480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.267 [2024-11-20 09:14:57.536551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.267 [2024-11-20 09:14:57.536564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.267 [2024-11-20 09:14:57.536571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.267 [2024-11-20 09:14:57.536577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.267 [2024-11-20 09:14:57.536591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.267 qpair failed and we were unable to recover it.
00:29:32.267 [2024-11-20 09:14:57.546363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.267 [2024-11-20 09:14:57.546413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.267 [2024-11-20 09:14:57.546426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.267 [2024-11-20 09:14:57.546433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.267 [2024-11-20 09:14:57.546439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.267 [2024-11-20 09:14:57.546453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.267 qpair failed and we were unable to recover it.
00:29:32.267 [2024-11-20 09:14:57.556523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.267 [2024-11-20 09:14:57.556573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.267 [2024-11-20 09:14:57.556586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.267 [2024-11-20 09:14:57.556593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.267 [2024-11-20 09:14:57.556599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.267 [2024-11-20 09:14:57.556612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.267 qpair failed and we were unable to recover it.
00:29:32.267 [2024-11-20 09:14:57.566567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.267 [2024-11-20 09:14:57.566626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.267 [2024-11-20 09:14:57.566639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.267 [2024-11-20 09:14:57.566646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.267 [2024-11-20 09:14:57.566652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.267 [2024-11-20 09:14:57.566669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.267 qpair failed and we were unable to recover it.
00:29:32.267 [2024-11-20 09:14:57.576638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.267 [2024-11-20 09:14:57.576714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.267 [2024-11-20 09:14:57.576726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.267 [2024-11-20 09:14:57.576733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.267 [2024-11-20 09:14:57.576739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.267 [2024-11-20 09:14:57.576753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.267 qpair failed and we were unable to recover it.
00:29:32.267 [2024-11-20 09:14:57.586558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.267 [2024-11-20 09:14:57.586605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.267 [2024-11-20 09:14:57.586618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.267 [2024-11-20 09:14:57.586625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.267 [2024-11-20 09:14:57.586631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.267 [2024-11-20 09:14:57.586644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.267 qpair failed and we were unable to recover it.
00:29:32.267 [2024-11-20 09:14:57.596603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.267 [2024-11-20 09:14:57.596652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.267 [2024-11-20 09:14:57.596665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.596672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.596679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.596692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.268 [2024-11-20 09:14:57.606627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.268 [2024-11-20 09:14:57.606699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.268 [2024-11-20 09:14:57.606712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.606720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.606726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.606739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.268 [2024-11-20 09:14:57.616662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.268 [2024-11-20 09:14:57.616729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.268 [2024-11-20 09:14:57.616743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.616750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.616756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.616770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.268 [2024-11-20 09:14:57.626636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.268 [2024-11-20 09:14:57.626692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.268 [2024-11-20 09:14:57.626708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.626715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.626722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.626737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.268 [2024-11-20 09:14:57.636633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.268 [2024-11-20 09:14:57.636687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.268 [2024-11-20 09:14:57.636701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.636708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.636714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.636728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.268 [2024-11-20 09:14:57.646689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.268 [2024-11-20 09:14:57.646759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.268 [2024-11-20 09:14:57.646772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.646780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.646786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.646800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.268 [2024-11-20 09:14:57.656802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.268 [2024-11-20 09:14:57.656899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.268 [2024-11-20 09:14:57.656912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.656927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.656933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.656948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.268 [2024-11-20 09:14:57.666810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.268 [2024-11-20 09:14:57.666854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.268 [2024-11-20 09:14:57.666867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.666874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.666880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.666894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.268 [2024-11-20 09:14:57.676865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.268 [2024-11-20 09:14:57.676917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.268 [2024-11-20 09:14:57.676931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.676938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.676944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.676958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.268 [2024-11-20 09:14:57.686892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.268 [2024-11-20 09:14:57.686950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.268 [2024-11-20 09:14:57.686963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.686970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.686977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.686990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.268 [2024-11-20 09:14:57.696913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.268 [2024-11-20 09:14:57.696969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.268 [2024-11-20 09:14:57.696984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.696991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.696997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.697014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.268 [2024-11-20 09:14:57.706895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.268 [2024-11-20 09:14:57.706940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.268 [2024-11-20 09:14:57.706954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.268 [2024-11-20 09:14:57.706962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.268 [2024-11-20 09:14:57.706968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.268 [2024-11-20 09:14:57.706981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.268 qpair failed and we were unable to recover it.
00:29:32.269 [2024-11-20 09:14:57.716919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.269 [2024-11-20 09:14:57.717014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.269 [2024-11-20 09:14:57.717027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.269 [2024-11-20 09:14:57.717034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.269 [2024-11-20 09:14:57.717040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.269 [2024-11-20 09:14:57.717054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.269 qpair failed and we were unable to recover it.
00:29:32.269 [2024-11-20 09:14:57.727046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.269 [2024-11-20 09:14:57.727099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.269 [2024-11-20 09:14:57.727113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.269 [2024-11-20 09:14:57.727120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.269 [2024-11-20 09:14:57.727126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.269 [2024-11-20 09:14:57.727140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.269 qpair failed and we were unable to recover it.
00:29:32.269 [2024-11-20 09:14:57.737044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.269 [2024-11-20 09:14:57.737095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.269 [2024-11-20 09:14:57.737109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.269 [2024-11-20 09:14:57.737116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.269 [2024-11-20 09:14:57.737122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.269 [2024-11-20 09:14:57.737136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.269 qpair failed and we were unable to recover it.
00:29:32.269 [2024-11-20 09:14:57.747008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.269 [2024-11-20 09:14:57.747096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.269 [2024-11-20 09:14:57.747109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.269 [2024-11-20 09:14:57.747117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.269 [2024-11-20 09:14:57.747123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.269 [2024-11-20 09:14:57.747137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.269 qpair failed and we were unable to recover it.
00:29:32.269 [2024-11-20 09:14:57.757089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.269 [2024-11-20 09:14:57.757144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.269 [2024-11-20 09:14:57.757157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.269 [2024-11-20 09:14:57.757168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.269 [2024-11-20 09:14:57.757174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.269 [2024-11-20 09:14:57.757188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.269 qpair failed and we were unable to recover it.
00:29:32.269 [2024-11-20 09:14:57.767004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.269 [2024-11-20 09:14:57.767061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.269 [2024-11-20 09:14:57.767075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.269 [2024-11-20 09:14:57.767082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.269 [2024-11-20 09:14:57.767088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.269 [2024-11-20 09:14:57.767101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.269 qpair failed and we were unable to recover it.
00:29:32.269 [2024-11-20 09:14:57.777127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.269 [2024-11-20 09:14:57.777186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.269 [2024-11-20 09:14:57.777208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.269 [2024-11-20 09:14:57.777215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.269 [2024-11-20 09:14:57.777221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.269 [2024-11-20 09:14:57.777235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.269 qpair failed and we were unable to recover it.
00:29:32.269 [2024-11-20 09:14:57.787131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.269 [2024-11-20 09:14:57.787182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.269 [2024-11-20 09:14:57.787195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.269 [2024-11-20 09:14:57.787206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.269 [2024-11-20 09:14:57.787212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.269 [2024-11-20 09:14:57.787226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.269 qpair failed and we were unable to recover it.
00:29:32.531 [2024-11-20 09:14:57.797202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.531 [2024-11-20 09:14:57.797252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.531 [2024-11-20 09:14:57.797266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.531 [2024-11-20 09:14:57.797273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.531 [2024-11-20 09:14:57.797279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.531 [2024-11-20 09:14:57.797293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.531 qpair failed and we were unable to recover it.
00:29:32.531 [2024-11-20 09:14:57.807243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.531 [2024-11-20 09:14:57.807300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.531 [2024-11-20 09:14:57.807314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.531 [2024-11-20 09:14:57.807321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.531 [2024-11-20 09:14:57.807327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.531 [2024-11-20 09:14:57.807341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.531 qpair failed and we were unable to recover it.
00:29:32.531 [2024-11-20 09:14:57.817301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.531 [2024-11-20 09:14:57.817369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.531 [2024-11-20 09:14:57.817382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.531 [2024-11-20 09:14:57.817389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.531 [2024-11-20 09:14:57.817396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.531 [2024-11-20 09:14:57.817409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.531 qpair failed and we were unable to recover it.
00:29:32.531 [2024-11-20 09:14:57.827239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.531 [2024-11-20 09:14:57.827325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.531 [2024-11-20 09:14:57.827338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.531 [2024-11-20 09:14:57.827345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.531 [2024-11-20 09:14:57.827351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.531 [2024-11-20 09:14:57.827368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.531 qpair failed and we were unable to recover it.
00:29:32.531 [2024-11-20 09:14:57.837271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.531 [2024-11-20 09:14:57.837326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.531 [2024-11-20 09:14:57.837339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.531 [2024-11-20 09:14:57.837346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.531 [2024-11-20 09:14:57.837352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.531 [2024-11-20 09:14:57.837366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.531 qpair failed and we were unable to recover it.
00:29:32.531 [2024-11-20 09:14:57.847457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.531 [2024-11-20 09:14:57.847537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.531 [2024-11-20 09:14:57.847550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.531 [2024-11-20 09:14:57.847557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.531 [2024-11-20 09:14:57.847563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.531 [2024-11-20 09:14:57.847576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.531 qpair failed and we were unable to recover it. 
00:29:32.531 [2024-11-20 09:14:57.857410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.531 [2024-11-20 09:14:57.857463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.531 [2024-11-20 09:14:57.857477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.531 [2024-11-20 09:14:57.857483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.531 [2024-11-20 09:14:57.857489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.531 [2024-11-20 09:14:57.857503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.531 qpair failed and we were unable to recover it. 
00:29:32.531 [2024-11-20 09:14:57.867361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.531 [2024-11-20 09:14:57.867408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.531 [2024-11-20 09:14:57.867421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.531 [2024-11-20 09:14:57.867428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.531 [2024-11-20 09:14:57.867434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.531 [2024-11-20 09:14:57.867448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.531 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.877472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.877524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.877538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.877545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.877551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.877565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.887495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.887548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.887561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.887567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.887574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.887587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.897482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.897536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.897550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.897557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.897563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.897577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.907485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.907533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.907546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.907553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.907559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.907572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.917546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.917621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.917634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.917645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.917651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.917665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.927580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.927636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.927649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.927656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.927662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.927675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.937603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.937659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.937673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.937679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.937686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.937699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.947581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.947630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.947643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.947650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.947656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.947669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.957609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.957665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.957678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.957685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.957692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.957709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.967672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.967725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.967738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.967745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.967752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.967765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.977697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.977756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.977769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.977776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.977783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.977796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.987696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.987743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.987756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.987763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.987769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.987783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:57.997690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.532 [2024-11-20 09:14:57.997745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.532 [2024-11-20 09:14:57.997758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.532 [2024-11-20 09:14:57.997765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.532 [2024-11-20 09:14:57.997771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.532 [2024-11-20 09:14:57.997785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.532 qpair failed and we were unable to recover it. 
00:29:32.532 [2024-11-20 09:14:58.007787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.533 [2024-11-20 09:14:58.007847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.533 [2024-11-20 09:14:58.007860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.533 [2024-11-20 09:14:58.007867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.533 [2024-11-20 09:14:58.007873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.533 [2024-11-20 09:14:58.007887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.533 qpair failed and we were unable to recover it. 
00:29:32.533 [2024-11-20 09:14:58.017796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.533 [2024-11-20 09:14:58.017844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.533 [2024-11-20 09:14:58.017858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.533 [2024-11-20 09:14:58.017865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.533 [2024-11-20 09:14:58.017871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.533 [2024-11-20 09:14:58.017884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.533 qpair failed and we were unable to recover it. 
00:29:32.533 [2024-11-20 09:14:58.027780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.533 [2024-11-20 09:14:58.027849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.533 [2024-11-20 09:14:58.027862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.533 [2024-11-20 09:14:58.027869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.533 [2024-11-20 09:14:58.027876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.533 [2024-11-20 09:14:58.027890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.533 qpair failed and we were unable to recover it. 
00:29:32.533 [2024-11-20 09:14:58.037831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.533 [2024-11-20 09:14:58.037879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.533 [2024-11-20 09:14:58.037896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.533 [2024-11-20 09:14:58.037903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.533 [2024-11-20 09:14:58.037909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.533 [2024-11-20 09:14:58.037924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.533 qpair failed and we were unable to recover it. 
00:29:32.533 [2024-11-20 09:14:58.047838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.533 [2024-11-20 09:14:58.047888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.533 [2024-11-20 09:14:58.047913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.533 [2024-11-20 09:14:58.047926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.533 [2024-11-20 09:14:58.047933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.533 [2024-11-20 09:14:58.047952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.533 qpair failed and we were unable to recover it. 
00:29:32.795 [2024-11-20 09:14:58.057893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.795 [2024-11-20 09:14:58.057954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.795 [2024-11-20 09:14:58.057979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.795 [2024-11-20 09:14:58.057988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.795 [2024-11-20 09:14:58.057996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.795 [2024-11-20 09:14:58.058015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.795 qpair failed and we were unable to recover it. 
00:29:32.795 [2024-11-20 09:14:58.067871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.795 [2024-11-20 09:14:58.067919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.795 [2024-11-20 09:14:58.067934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.795 [2024-11-20 09:14:58.067942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.795 [2024-11-20 09:14:58.067948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.795 [2024-11-20 09:14:58.067963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.795 qpair failed and we were unable to recover it. 
00:29:32.795 [2024-11-20 09:14:58.077905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.795 [2024-11-20 09:14:58.077957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.795 [2024-11-20 09:14:58.077970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.795 [2024-11-20 09:14:58.077977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.795 [2024-11-20 09:14:58.077984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.795 [2024-11-20 09:14:58.077998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.795 qpair failed and we were unable to recover it. 
00:29:32.795 [2024-11-20 09:14:58.087972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.795 [2024-11-20 09:14:58.088042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.795 [2024-11-20 09:14:58.088055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.795 [2024-11-20 09:14:58.088062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.795 [2024-11-20 09:14:58.088069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.795 [2024-11-20 09:14:58.088087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.795 qpair failed and we were unable to recover it. 
00:29:32.795 [2024-11-20 09:14:58.097898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.795 [2024-11-20 09:14:58.097951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.795 [2024-11-20 09:14:58.097966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.795 [2024-11-20 09:14:58.097973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.795 [2024-11-20 09:14:58.097980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.795 [2024-11-20 09:14:58.097994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.795 qpair failed and we were unable to recover it. 
00:29:32.795 [2024-11-20 09:14:58.107881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.795 [2024-11-20 09:14:58.107935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.795 [2024-11-20 09:14:58.107949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.795 [2024-11-20 09:14:58.107956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.795 [2024-11-20 09:14:58.107962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:32.795 [2024-11-20 09:14:58.107976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.795 qpair failed and we were unable to recover it. 
00:29:32.795 [2024-11-20 09:14:58.118063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.795 [2024-11-20 09:14:58.118107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.795 [2024-11-20 09:14:58.118121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.795 [2024-11-20 09:14:58.118128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.795 [2024-11-20 09:14:58.118134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.795 [2024-11-20 09:14:58.118148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.795 qpair failed and we were unable to recover it.
00:29:32.795 [2024-11-20 09:14:58.127918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.795 [2024-11-20 09:14:58.127979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.795 [2024-11-20 09:14:58.127992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.795 [2024-11-20 09:14:58.128000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.795 [2024-11-20 09:14:58.128006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.795 [2024-11-20 09:14:58.128021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.795 qpair failed and we were unable to recover it.
00:29:32.795 [2024-11-20 09:14:58.138124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.795 [2024-11-20 09:14:58.138188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.795 [2024-11-20 09:14:58.138202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.795 [2024-11-20 09:14:58.138209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.795 [2024-11-20 09:14:58.138215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.795 [2024-11-20 09:14:58.138229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.795 qpair failed and we were unable to recover it.
00:29:32.795 [2024-11-20 09:14:58.148094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.795 [2024-11-20 09:14:58.148139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.795 [2024-11-20 09:14:58.148152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.795 [2024-11-20 09:14:58.148163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.795 [2024-11-20 09:14:58.148170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.795 [2024-11-20 09:14:58.148184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.795 qpair failed and we were unable to recover it.
00:29:32.795 [2024-11-20 09:14:58.158176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.795 [2024-11-20 09:14:58.158230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.795 [2024-11-20 09:14:58.158243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.795 [2024-11-20 09:14:58.158250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.795 [2024-11-20 09:14:58.158257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.795 [2024-11-20 09:14:58.158270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.795 qpair failed and we were unable to recover it.
00:29:32.795 [2024-11-20 09:14:58.168148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.795 [2024-11-20 09:14:58.168245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.168259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.168266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.168273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.168286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.178229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.178280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.178293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.178304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.178310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.178324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.188214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.188273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.188286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.188293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.188300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.188314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.198267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.198313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.198326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.198333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.198339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.198353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.208243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.208338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.208352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.208359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.208365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.208379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.218352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.218406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.218419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.218426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.218433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.218450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.228320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.228366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.228379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.228386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.228393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.228406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.238377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.238421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.238434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.238441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.238448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.238461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.248358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.248438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.248452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.248458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.248464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.248478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.258454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.258537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.258550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.258557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.258563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.258577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.268413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.268467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.268481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.268488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.268494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.268508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.278346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.278392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.278405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.278412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.278418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.278432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.288504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.288574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.288587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.796 [2024-11-20 09:14:58.288595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.796 [2024-11-20 09:14:58.288602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.796 [2024-11-20 09:14:58.288615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.796 qpair failed and we were unable to recover it.
00:29:32.796 [2024-11-20 09:14:58.298625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.796 [2024-11-20 09:14:58.298707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.796 [2024-11-20 09:14:58.298721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.797 [2024-11-20 09:14:58.298728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.797 [2024-11-20 09:14:58.298734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.797 [2024-11-20 09:14:58.298748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.797 qpair failed and we were unable to recover it.
00:29:32.797 [2024-11-20 09:14:58.308507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.797 [2024-11-20 09:14:58.308601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.797 [2024-11-20 09:14:58.308614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.797 [2024-11-20 09:14:58.308625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.797 [2024-11-20 09:14:58.308631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.797 [2024-11-20 09:14:58.308645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.797 qpair failed and we were unable to recover it.
00:29:32.797 [2024-11-20 09:14:58.318618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.797 [2024-11-20 09:14:58.318700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.797 [2024-11-20 09:14:58.318713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.797 [2024-11-20 09:14:58.318720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.797 [2024-11-20 09:14:58.318726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:32.797 [2024-11-20 09:14:58.318740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.797 qpair failed and we were unable to recover it.
00:29:33.058 [2024-11-20 09:14:58.328583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.058 [2024-11-20 09:14:58.328634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.058 [2024-11-20 09:14:58.328648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.058 [2024-11-20 09:14:58.328655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.058 [2024-11-20 09:14:58.328661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.058 [2024-11-20 09:14:58.328674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.058 qpair failed and we were unable to recover it.
00:29:33.058 [2024-11-20 09:14:58.338640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.058 [2024-11-20 09:14:58.338690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.058 [2024-11-20 09:14:58.338703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.058 [2024-11-20 09:14:58.338710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.058 [2024-11-20 09:14:58.338716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.058 [2024-11-20 09:14:58.338729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.058 qpair failed and we were unable to recover it.
00:29:33.058 [2024-11-20 09:14:58.348646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.058 [2024-11-20 09:14:58.348689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.058 [2024-11-20 09:14:58.348702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.058 [2024-11-20 09:14:58.348709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.058 [2024-11-20 09:14:58.348716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.058 [2024-11-20 09:14:58.348733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.058 qpair failed and we were unable to recover it.
00:29:33.058 [2024-11-20 09:14:58.358575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.059 [2024-11-20 09:14:58.358628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.059 [2024-11-20 09:14:58.358641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.059 [2024-11-20 09:14:58.358648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.059 [2024-11-20 09:14:58.358654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.059 [2024-11-20 09:14:58.358668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.059 qpair failed and we were unable to recover it.
00:29:33.059 [2024-11-20 09:14:58.368683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.059 [2024-11-20 09:14:58.368731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.059 [2024-11-20 09:14:58.368744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.059 [2024-11-20 09:14:58.368751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.059 [2024-11-20 09:14:58.368757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.059 [2024-11-20 09:14:58.368771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.059 qpair failed and we were unable to recover it.
00:29:33.059 [2024-11-20 09:14:58.378719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.059 [2024-11-20 09:14:58.378770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.059 [2024-11-20 09:14:58.378783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.059 [2024-11-20 09:14:58.378790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.059 [2024-11-20 09:14:58.378797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.059 [2024-11-20 09:14:58.378810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.059 qpair failed and we were unable to recover it.
00:29:33.059 [2024-11-20 09:14:58.388603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.059 [2024-11-20 09:14:58.388653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.059 [2024-11-20 09:14:58.388666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.059 [2024-11-20 09:14:58.388674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.059 [2024-11-20 09:14:58.388680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.059 [2024-11-20 09:14:58.388693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.059 qpair failed and we were unable to recover it.
00:29:33.059 [2024-11-20 09:14:58.398830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.059 [2024-11-20 09:14:58.398879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.059 [2024-11-20 09:14:58.398893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.059 [2024-11-20 09:14:58.398900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.059 [2024-11-20 09:14:58.398906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.059 [2024-11-20 09:14:58.398920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.059 qpair failed and we were unable to recover it.
00:29:33.059 [2024-11-20 09:14:58.408765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.059 [2024-11-20 09:14:58.408858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.059 [2024-11-20 09:14:58.408871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.059 [2024-11-20 09:14:58.408878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.059 [2024-11-20 09:14:58.408885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.059 [2024-11-20 09:14:58.408898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.059 qpair failed and we were unable to recover it.
00:29:33.059 [2024-11-20 09:14:58.418812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.059 [2024-11-20 09:14:58.418868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.059 [2024-11-20 09:14:58.418893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.059 [2024-11-20 09:14:58.418902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.059 [2024-11-20 09:14:58.418909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.059 [2024-11-20 09:14:58.418928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.059 qpair failed and we were unable to recover it.
00:29:33.059 [2024-11-20 09:14:58.428835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.059 [2024-11-20 09:14:58.428889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.059 [2024-11-20 09:14:58.428914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.059 [2024-11-20 09:14:58.428922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.059 [2024-11-20 09:14:58.428929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.059 [2024-11-20 09:14:58.428949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.059 qpair failed and we were unable to recover it.
00:29:33.059 [2024-11-20 09:14:58.438897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.059 [2024-11-20 09:14:58.438947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.059 [2024-11-20 09:14:58.438962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.059 [2024-11-20 09:14:58.438974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.059 [2024-11-20 09:14:58.438981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.059 [2024-11-20 09:14:58.438996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.059 qpair failed and we were unable to recover it.
00:29:33.059 [2024-11-20 09:14:58.448892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.059 [2024-11-20 09:14:58.448943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.059 [2024-11-20 09:14:58.448968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.059 [2024-11-20 09:14:58.448977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.059 [2024-11-20 09:14:58.448984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.059 [2024-11-20 09:14:58.449003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.059 qpair failed and we were unable to recover it.
00:29:33.059 [2024-11-20 09:14:58.458941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.059 [2024-11-20 09:14:58.459033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.059 [2024-11-20 09:14:58.459057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.059 [2024-11-20 09:14:58.459066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.059 [2024-11-20 09:14:58.459073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.059 [2024-11-20 09:14:58.459092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.059 qpair failed and we were unable to recover it.
00:29:33.059 [2024-11-20 09:14:58.468951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.059 [2024-11-20 09:14:58.468998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.059 [2024-11-20 09:14:58.469013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.059 [2024-11-20 09:14:58.469020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.059 [2024-11-20 09:14:58.469027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.059 [2024-11-20 09:14:58.469041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.059 qpair failed and we were unable to recover it. 
00:29:33.059 [2024-11-20 09:14:58.478989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.059 [2024-11-20 09:14:58.479034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.059 [2024-11-20 09:14:58.479048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.059 [2024-11-20 09:14:58.479055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.059 [2024-11-20 09:14:58.479061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.059 [2024-11-20 09:14:58.479079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.059 qpair failed and we were unable to recover it. 
00:29:33.059 [2024-11-20 09:14:58.489005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.060 [2024-11-20 09:14:58.489050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.060 [2024-11-20 09:14:58.489064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.060 [2024-11-20 09:14:58.489071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.060 [2024-11-20 09:14:58.489078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.060 [2024-11-20 09:14:58.489091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.060 qpair failed and we were unable to recover it. 
00:29:33.060 [2024-11-20 09:14:58.499044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.060 [2024-11-20 09:14:58.499090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.060 [2024-11-20 09:14:58.499104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.060 [2024-11-20 09:14:58.499111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.060 [2024-11-20 09:14:58.499117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.060 [2024-11-20 09:14:58.499131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.060 qpair failed and we were unable to recover it. 
00:29:33.060 [2024-11-20 09:14:58.509054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.060 [2024-11-20 09:14:58.509138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.060 [2024-11-20 09:14:58.509151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.060 [2024-11-20 09:14:58.509161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.060 [2024-11-20 09:14:58.509168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.060 [2024-11-20 09:14:58.509182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.060 qpair failed and we were unable to recover it. 
00:29:33.060 [2024-11-20 09:14:58.519134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.060 [2024-11-20 09:14:58.519182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.060 [2024-11-20 09:14:58.519197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.060 [2024-11-20 09:14:58.519204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.060 [2024-11-20 09:14:58.519210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.060 [2024-11-20 09:14:58.519225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.060 qpair failed and we were unable to recover it. 
00:29:33.060 [2024-11-20 09:14:58.529117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.060 [2024-11-20 09:14:58.529170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.060 [2024-11-20 09:14:58.529183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.060 [2024-11-20 09:14:58.529191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.060 [2024-11-20 09:14:58.529197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.060 [2024-11-20 09:14:58.529211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.060 qpair failed and we were unable to recover it. 
00:29:33.060 [2024-11-20 09:14:58.539201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.060 [2024-11-20 09:14:58.539251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.060 [2024-11-20 09:14:58.539265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.060 [2024-11-20 09:14:58.539272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.060 [2024-11-20 09:14:58.539278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.060 [2024-11-20 09:14:58.539292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.060 qpair failed and we were unable to recover it. 
00:29:33.060 [2024-11-20 09:14:58.549185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.060 [2024-11-20 09:14:58.549229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.060 [2024-11-20 09:14:58.549242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.060 [2024-11-20 09:14:58.549249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.060 [2024-11-20 09:14:58.549255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.060 [2024-11-20 09:14:58.549269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.060 qpair failed and we were unable to recover it. 
00:29:33.060 [2024-11-20 09:14:58.559249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.060 [2024-11-20 09:14:58.559353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.060 [2024-11-20 09:14:58.559366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.060 [2024-11-20 09:14:58.559372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.060 [2024-11-20 09:14:58.559379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.060 [2024-11-20 09:14:58.559393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.060 qpair failed and we were unable to recover it. 
00:29:33.060 [2024-11-20 09:14:58.569220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.060 [2024-11-20 09:14:58.569266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.060 [2024-11-20 09:14:58.569279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.060 [2024-11-20 09:14:58.569290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.060 [2024-11-20 09:14:58.569296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.060 [2024-11-20 09:14:58.569310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.060 qpair failed and we were unable to recover it. 
00:29:33.060 [2024-11-20 09:14:58.579302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.060 [2024-11-20 09:14:58.579350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.060 [2024-11-20 09:14:58.579363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.060 [2024-11-20 09:14:58.579370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.060 [2024-11-20 09:14:58.579377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.060 [2024-11-20 09:14:58.579391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.060 qpair failed and we were unable to recover it. 
00:29:33.322 [2024-11-20 09:14:58.589209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.322 [2024-11-20 09:14:58.589253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.322 [2024-11-20 09:14:58.589266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.322 [2024-11-20 09:14:58.589273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.322 [2024-11-20 09:14:58.589279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.322 [2024-11-20 09:14:58.589293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.322 qpair failed and we were unable to recover it. 
00:29:33.322 [2024-11-20 09:14:58.599353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.322 [2024-11-20 09:14:58.599402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.322 [2024-11-20 09:14:58.599416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.322 [2024-11-20 09:14:58.599423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.322 [2024-11-20 09:14:58.599429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.322 [2024-11-20 09:14:58.599443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.322 qpair failed and we were unable to recover it. 
00:29:33.322 [2024-11-20 09:14:58.609388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.322 [2024-11-20 09:14:58.609437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.322 [2024-11-20 09:14:58.609451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.322 [2024-11-20 09:14:58.609458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.322 [2024-11-20 09:14:58.609464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.322 [2024-11-20 09:14:58.609485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.322 qpair failed and we were unable to recover it. 
00:29:33.322 [2024-11-20 09:14:58.619425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.322 [2024-11-20 09:14:58.619480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.322 [2024-11-20 09:14:58.619493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.322 [2024-11-20 09:14:58.619500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.322 [2024-11-20 09:14:58.619506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.322 [2024-11-20 09:14:58.619519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.322 qpair failed and we were unable to recover it. 
00:29:33.322 [2024-11-20 09:14:58.629300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.322 [2024-11-20 09:14:58.629344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.322 [2024-11-20 09:14:58.629360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.322 [2024-11-20 09:14:58.629367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.322 [2024-11-20 09:14:58.629374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.322 [2024-11-20 09:14:58.629389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.322 qpair failed and we were unable to recover it. 
00:29:33.322 [2024-11-20 09:14:58.639488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.322 [2024-11-20 09:14:58.639574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.322 [2024-11-20 09:14:58.639588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.322 [2024-11-20 09:14:58.639595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.322 [2024-11-20 09:14:58.639601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.322 [2024-11-20 09:14:58.639615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.322 qpair failed and we were unable to recover it. 
00:29:33.322 [2024-11-20 09:14:58.649437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.322 [2024-11-20 09:14:58.649533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.322 [2024-11-20 09:14:58.649547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.322 [2024-11-20 09:14:58.649554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.322 [2024-11-20 09:14:58.649560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.322 [2024-11-20 09:14:58.649574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.322 qpair failed and we were unable to recover it. 
00:29:33.322 [2024-11-20 09:14:58.659504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.322 [2024-11-20 09:14:58.659554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.322 [2024-11-20 09:14:58.659567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.322 [2024-11-20 09:14:58.659574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.322 [2024-11-20 09:14:58.659580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.322 [2024-11-20 09:14:58.659594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.322 qpair failed and we were unable to recover it. 
00:29:33.322 [2024-11-20 09:14:58.669506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.322 [2024-11-20 09:14:58.669548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.322 [2024-11-20 09:14:58.669561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.322 [2024-11-20 09:14:58.669568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.322 [2024-11-20 09:14:58.669574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.322 [2024-11-20 09:14:58.669588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.323 qpair failed and we were unable to recover it. 
00:29:33.323 [2024-11-20 09:14:58.679543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.323 [2024-11-20 09:14:58.679589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.323 [2024-11-20 09:14:58.679602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.323 [2024-11-20 09:14:58.679609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.323 [2024-11-20 09:14:58.679615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.323 [2024-11-20 09:14:58.679628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.323 qpair failed and we were unable to recover it. 
00:29:33.323 [2024-11-20 09:14:58.689522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.323 [2024-11-20 09:14:58.689568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.323 [2024-11-20 09:14:58.689581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.323 [2024-11-20 09:14:58.689588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.323 [2024-11-20 09:14:58.689594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.323 [2024-11-20 09:14:58.689608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.323 qpair failed and we were unable to recover it. 
00:29:33.323 [2024-11-20 09:14:58.699588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.323 [2024-11-20 09:14:58.699640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.323 [2024-11-20 09:14:58.699654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.323 [2024-11-20 09:14:58.699664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.323 [2024-11-20 09:14:58.699671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.323 [2024-11-20 09:14:58.699684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.323 qpair failed and we were unable to recover it. 
00:29:33.323 [2024-11-20 09:14:58.709605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.323 [2024-11-20 09:14:58.709651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.323 [2024-11-20 09:14:58.709664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.323 [2024-11-20 09:14:58.709671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.323 [2024-11-20 09:14:58.709677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.323 [2024-11-20 09:14:58.709690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.323 qpair failed and we were unable to recover it. 
00:29:33.323 [2024-11-20 09:14:58.719672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.323 [2024-11-20 09:14:58.719719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.323 [2024-11-20 09:14:58.719732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.323 [2024-11-20 09:14:58.719739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.323 [2024-11-20 09:14:58.719745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.323 [2024-11-20 09:14:58.719758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.323 qpair failed and we were unable to recover it. 
00:29:33.323 [2024-11-20 09:14:58.729655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.323 [2024-11-20 09:14:58.729702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.323 [2024-11-20 09:14:58.729715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.323 [2024-11-20 09:14:58.729722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.323 [2024-11-20 09:14:58.729728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.323 [2024-11-20 09:14:58.729742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.323 qpair failed and we were unable to recover it. 
00:29:33.323 [2024-11-20 09:14:58.739699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.323 [2024-11-20 09:14:58.739749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.323 [2024-11-20 09:14:58.739762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.323 [2024-11-20 09:14:58.739769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.323 [2024-11-20 09:14:58.739775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.323 [2024-11-20 09:14:58.739792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.323 qpair failed and we were unable to recover it.
00:29:33.323 [2024-11-20 09:14:58.749641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.323 [2024-11-20 09:14:58.749729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.323 [2024-11-20 09:14:58.749743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.323 [2024-11-20 09:14:58.749750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.323 [2024-11-20 09:14:58.749756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.323 [2024-11-20 09:14:58.749770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.323 qpair failed and we were unable to recover it.
00:29:33.323 [2024-11-20 09:14:58.759756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.323 [2024-11-20 09:14:58.759807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.323 [2024-11-20 09:14:58.759820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.323 [2024-11-20 09:14:58.759827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.323 [2024-11-20 09:14:58.759833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.323 [2024-11-20 09:14:58.759847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.323 qpair failed and we were unable to recover it.
00:29:33.323 [2024-11-20 09:14:58.769771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.323 [2024-11-20 09:14:58.769818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.323 [2024-11-20 09:14:58.769831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.323 [2024-11-20 09:14:58.769838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.323 [2024-11-20 09:14:58.769844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.323 [2024-11-20 09:14:58.769857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.323 qpair failed and we were unable to recover it.
00:29:33.323 [2024-11-20 09:14:58.779840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.323 [2024-11-20 09:14:58.779890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.323 [2024-11-20 09:14:58.779904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.323 [2024-11-20 09:14:58.779910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.323 [2024-11-20 09:14:58.779917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.323 [2024-11-20 09:14:58.779930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.323 qpair failed and we were unable to recover it.
00:29:33.323 [2024-11-20 09:14:58.789837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.323 [2024-11-20 09:14:58.789941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.324 [2024-11-20 09:14:58.789967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.324 [2024-11-20 09:14:58.789975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.324 [2024-11-20 09:14:58.789982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.324 [2024-11-20 09:14:58.790001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.324 qpair failed and we were unable to recover it.
00:29:33.324 [2024-11-20 09:14:58.799853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.324 [2024-11-20 09:14:58.799903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.324 [2024-11-20 09:14:58.799928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.324 [2024-11-20 09:14:58.799936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.324 [2024-11-20 09:14:58.799944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.324 [2024-11-20 09:14:58.799963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.324 qpair failed and we were unable to recover it.
00:29:33.324 [2024-11-20 09:14:58.809862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.324 [2024-11-20 09:14:58.809909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.324 [2024-11-20 09:14:58.809928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.324 [2024-11-20 09:14:58.809935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.324 [2024-11-20 09:14:58.809942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.324 [2024-11-20 09:14:58.809958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.324 qpair failed and we were unable to recover it.
00:29:33.324 [2024-11-20 09:14:58.819944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.324 [2024-11-20 09:14:58.819995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.324 [2024-11-20 09:14:58.820008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.324 [2024-11-20 09:14:58.820015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.324 [2024-11-20 09:14:58.820022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.324 [2024-11-20 09:14:58.820036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.324 qpair failed and we were unable to recover it.
00:29:33.324 [2024-11-20 09:14:58.829895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.324 [2024-11-20 09:14:58.829987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.324 [2024-11-20 09:14:58.830000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.324 [2024-11-20 09:14:58.830011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.324 [2024-11-20 09:14:58.830018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.324 [2024-11-20 09:14:58.830032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.324 qpair failed and we were unable to recover it.
00:29:33.324 [2024-11-20 09:14:58.839962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.324 [2024-11-20 09:14:58.840012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.324 [2024-11-20 09:14:58.840025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.324 [2024-11-20 09:14:58.840032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.324 [2024-11-20 09:14:58.840039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.324 [2024-11-20 09:14:58.840052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.324 qpair failed and we were unable to recover it.
00:29:33.585 [2024-11-20 09:14:58.849979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.585 [2024-11-20 09:14:58.850024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.585 [2024-11-20 09:14:58.850037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.585 [2024-11-20 09:14:58.850044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.585 [2024-11-20 09:14:58.850051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.850064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.860030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.860099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.860112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.860119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.860126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.860140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.870035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.870077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.870090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.870098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.870104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.870121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.880097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.880144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.880157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.880169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.880175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.880189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.890086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.890131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.890144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.890151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.890162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.890176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.900174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.900226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.900240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.900247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.900253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.900267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.910147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.910240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.910254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.910261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.910268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.910281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.920212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.920264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.920277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.920284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.920290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.920304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.930128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.930179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.930192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.930199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.930205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.930219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.940294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.940347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.940360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.940367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.940373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.940387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.950229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.950318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.950333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.950340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.950346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.950360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.960323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.960373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.960386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.960396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.960403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.960416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.970301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.970347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.970361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.970368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.970374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.586 [2024-11-20 09:14:58.970388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.586 qpair failed and we were unable to recover it.
00:29:33.586 [2024-11-20 09:14:58.980338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.586 [2024-11-20 09:14:58.980391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.586 [2024-11-20 09:14:58.980404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.586 [2024-11-20 09:14:58.980411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.586 [2024-11-20 09:14:58.980417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.587 [2024-11-20 09:14:58.980431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.587 qpair failed and we were unable to recover it.
00:29:33.587 [2024-11-20 09:14:58.990368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.587 [2024-11-20 09:14:58.990451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.587 [2024-11-20 09:14:58.990465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.587 [2024-11-20 09:14:58.990472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.587 [2024-11-20 09:14:58.990478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.587 [2024-11-20 09:14:58.990495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.587 qpair failed and we were unable to recover it.
00:29:33.587 [2024-11-20 09:14:59.000415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.587 [2024-11-20 09:14:59.000471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.587 [2024-11-20 09:14:59.000485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.587 [2024-11-20 09:14:59.000492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.587 [2024-11-20 09:14:59.000499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.587 [2024-11-20 09:14:59.000516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.587 qpair failed and we were unable to recover it.
00:29:33.587 [2024-11-20 09:14:59.010391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.587 [2024-11-20 09:14:59.010464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.587 [2024-11-20 09:14:59.010477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.587 [2024-11-20 09:14:59.010485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.587 [2024-11-20 09:14:59.010491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.587 [2024-11-20 09:14:59.010504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.587 qpair failed and we were unable to recover it.
00:29:33.587 [2024-11-20 09:14:59.020487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.587 [2024-11-20 09:14:59.020537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.587 [2024-11-20 09:14:59.020550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.587 [2024-11-20 09:14:59.020557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.587 [2024-11-20 09:14:59.020563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.587 [2024-11-20 09:14:59.020577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.587 qpair failed and we were unable to recover it.
00:29:33.587 [2024-11-20 09:14:59.030482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.587 [2024-11-20 09:14:59.030560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.587 [2024-11-20 09:14:59.030573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.587 [2024-11-20 09:14:59.030580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.587 [2024-11-20 09:14:59.030586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.587 [2024-11-20 09:14:59.030599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.587 qpair failed and we were unable to recover it.
00:29:33.587 [2024-11-20 09:14:59.040524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.587 [2024-11-20 09:14:59.040610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.587 [2024-11-20 09:14:59.040623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.587 [2024-11-20 09:14:59.040630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.587 [2024-11-20 09:14:59.040636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.587 [2024-11-20 09:14:59.040650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.587 qpair failed and we were unable to recover it.
00:29:33.587 [2024-11-20 09:14:59.050532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.587 [2024-11-20 09:14:59.050579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.587 [2024-11-20 09:14:59.050592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.587 [2024-11-20 09:14:59.050599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.587 [2024-11-20 09:14:59.050605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.587 [2024-11-20 09:14:59.050619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.587 qpair failed and we were unable to recover it.
00:29:33.587 [2024-11-20 09:14:59.060517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.587 [2024-11-20 09:14:59.060612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.587 [2024-11-20 09:14:59.060625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.587 [2024-11-20 09:14:59.060632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.587 [2024-11-20 09:14:59.060638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.587 [2024-11-20 09:14:59.060652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.587 qpair failed and we were unable to recover it.
00:29:33.587 [2024-11-20 09:14:59.070548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.587 [2024-11-20 09:14:59.070598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.587 [2024-11-20 09:14:59.070611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.587 [2024-11-20 09:14:59.070618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.587 [2024-11-20 09:14:59.070624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.587 [2024-11-20 09:14:59.070638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.587 qpair failed and we were unable to recover it.
00:29:33.587 [2024-11-20 09:14:59.080612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.587 [2024-11-20 09:14:59.080659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.587 [2024-11-20 09:14:59.080672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.587 [2024-11-20 09:14:59.080679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.587 [2024-11-20 09:14:59.080686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.587 [2024-11-20 09:14:59.080699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.587 qpair failed and we were unable to recover it.
00:29:33.587 [2024-11-20 09:14:59.090622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.587 [2024-11-20 09:14:59.090671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.587 [2024-11-20 09:14:59.090684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.587 [2024-11-20 09:14:59.090694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.587 [2024-11-20 09:14:59.090701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.587 [2024-11-20 09:14:59.090714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.587 qpair failed and we were unable to recover it. 
00:29:33.587 [2024-11-20 09:14:59.100695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.587 [2024-11-20 09:14:59.100750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.587 [2024-11-20 09:14:59.100764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.587 [2024-11-20 09:14:59.100771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.587 [2024-11-20 09:14:59.100777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.587 [2024-11-20 09:14:59.100790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.587 qpair failed and we were unable to recover it. 
00:29:33.587 [2024-11-20 09:14:59.110688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.855 [2024-11-20 09:14:59.110738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.856 [2024-11-20 09:14:59.110751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.856 [2024-11-20 09:14:59.110759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.856 [2024-11-20 09:14:59.110767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.856 [2024-11-20 09:14:59.110781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.856 qpair failed and we were unable to recover it. 
00:29:33.856 [2024-11-20 09:14:59.120724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.856 [2024-11-20 09:14:59.120775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.856 [2024-11-20 09:14:59.120788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.856 [2024-11-20 09:14:59.120795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.856 [2024-11-20 09:14:59.120802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.856 [2024-11-20 09:14:59.120815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.856 qpair failed and we were unable to recover it. 
00:29:33.856 [2024-11-20 09:14:59.130740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.856 [2024-11-20 09:14:59.130787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.856 [2024-11-20 09:14:59.130801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.856 [2024-11-20 09:14:59.130807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.856 [2024-11-20 09:14:59.130813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.856 [2024-11-20 09:14:59.130832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.856 qpair failed and we were unable to recover it. 
00:29:33.856 [2024-11-20 09:14:59.140808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.856 [2024-11-20 09:14:59.140864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.856 [2024-11-20 09:14:59.140877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.856 [2024-11-20 09:14:59.140884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.856 [2024-11-20 09:14:59.140891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.856 [2024-11-20 09:14:59.140904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.856 qpair failed and we were unable to recover it. 
00:29:33.856 [2024-11-20 09:14:59.150790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.856 [2024-11-20 09:14:59.150839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.856 [2024-11-20 09:14:59.150865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.856 [2024-11-20 09:14:59.150873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.856 [2024-11-20 09:14:59.150881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.856 [2024-11-20 09:14:59.150900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.856 qpair failed and we were unable to recover it. 
00:29:33.857 [2024-11-20 09:14:59.160808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.857 [2024-11-20 09:14:59.160866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.857 [2024-11-20 09:14:59.160892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.857 [2024-11-20 09:14:59.160901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.857 [2024-11-20 09:14:59.160908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.857 [2024-11-20 09:14:59.160927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.857 qpair failed and we were unable to recover it. 
00:29:33.857 [2024-11-20 09:14:59.170850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.857 [2024-11-20 09:14:59.170895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.857 [2024-11-20 09:14:59.170910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.857 [2024-11-20 09:14:59.170917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.857 [2024-11-20 09:14:59.170924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.857 [2024-11-20 09:14:59.170939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.857 qpair failed and we were unable to recover it. 
00:29:33.857 [2024-11-20 09:14:59.180926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.857 [2024-11-20 09:14:59.180998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.857 [2024-11-20 09:14:59.181012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.857 [2024-11-20 09:14:59.181019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.857 [2024-11-20 09:14:59.181026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.857 [2024-11-20 09:14:59.181040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.857 qpair failed and we were unable to recover it. 
00:29:33.857 [2024-11-20 09:14:59.190909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.857 [2024-11-20 09:14:59.190953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.857 [2024-11-20 09:14:59.190967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.857 [2024-11-20 09:14:59.190974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.857 [2024-11-20 09:14:59.190980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.857 [2024-11-20 09:14:59.190994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.857 qpair failed and we were unable to recover it. 
00:29:33.857 [2024-11-20 09:14:59.200891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.857 [2024-11-20 09:14:59.200938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.857 [2024-11-20 09:14:59.200952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.857 [2024-11-20 09:14:59.200959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.857 [2024-11-20 09:14:59.200966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.857 [2024-11-20 09:14:59.200980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.857 qpair failed and we were unable to recover it. 
00:29:33.857 [2024-11-20 09:14:59.210961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.857 [2024-11-20 09:14:59.211007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.857 [2024-11-20 09:14:59.211020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.857 [2024-11-20 09:14:59.211027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.857 [2024-11-20 09:14:59.211034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.857 [2024-11-20 09:14:59.211049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.857 qpair failed and we were unable to recover it. 
00:29:33.857 [2024-11-20 09:14:59.221021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.857 [2024-11-20 09:14:59.221069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.857 [2024-11-20 09:14:59.221083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.857 [2024-11-20 09:14:59.221094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.858 [2024-11-20 09:14:59.221100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.858 [2024-11-20 09:14:59.221114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.858 qpair failed and we were unable to recover it. 
00:29:33.858 [2024-11-20 09:14:59.230879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.858 [2024-11-20 09:14:59.230925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.858 [2024-11-20 09:14:59.230938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.858 [2024-11-20 09:14:59.230945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.858 [2024-11-20 09:14:59.230952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.858 [2024-11-20 09:14:59.230966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.858 qpair failed and we were unable to recover it. 
00:29:33.858 [2024-11-20 09:14:59.241071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.858 [2024-11-20 09:14:59.241122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.858 [2024-11-20 09:14:59.241135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.858 [2024-11-20 09:14:59.241143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.858 [2024-11-20 09:14:59.241149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.858 [2024-11-20 09:14:59.241166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.858 qpair failed and we were unable to recover it. 
00:29:33.858 [2024-11-20 09:14:59.251026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.858 [2024-11-20 09:14:59.251072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.858 [2024-11-20 09:14:59.251086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.858 [2024-11-20 09:14:59.251093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.858 [2024-11-20 09:14:59.251099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.859 [2024-11-20 09:14:59.251113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.859 qpair failed and we were unable to recover it. 
00:29:33.859 [2024-11-20 09:14:59.261097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.859 [2024-11-20 09:14:59.261193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.859 [2024-11-20 09:14:59.261207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.859 [2024-11-20 09:14:59.261214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.859 [2024-11-20 09:14:59.261221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.859 [2024-11-20 09:14:59.261238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.859 qpair failed and we were unable to recover it. 
00:29:33.859 [2024-11-20 09:14:59.271074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.859 [2024-11-20 09:14:59.271120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.859 [2024-11-20 09:14:59.271134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.859 [2024-11-20 09:14:59.271140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.859 [2024-11-20 09:14:59.271147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.859 [2024-11-20 09:14:59.271163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.859 qpair failed and we were unable to recover it. 
00:29:33.859 [2024-11-20 09:14:59.281175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.859 [2024-11-20 09:14:59.281223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.859 [2024-11-20 09:14:59.281241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.859 [2024-11-20 09:14:59.281248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.859 [2024-11-20 09:14:59.281254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.859 [2024-11-20 09:14:59.281269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.859 qpair failed and we were unable to recover it. 
00:29:33.859 [2024-11-20 09:14:59.291179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.859 [2024-11-20 09:14:59.291226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.859 [2024-11-20 09:14:59.291240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.859 [2024-11-20 09:14:59.291247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.859 [2024-11-20 09:14:59.291253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.859 [2024-11-20 09:14:59.291267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.859 qpair failed and we were unable to recover it. 
00:29:33.859 [2024-11-20 09:14:59.302031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.860 [2024-11-20 09:14:59.302084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.860 [2024-11-20 09:14:59.302098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.860 [2024-11-20 09:14:59.302105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.860 [2024-11-20 09:14:59.302111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.860 [2024-11-20 09:14:59.302125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.860 qpair failed and we were unable to recover it. 
00:29:33.860 [2024-11-20 09:14:59.311222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.860 [2024-11-20 09:14:59.311271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.860 [2024-11-20 09:14:59.311285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.860 [2024-11-20 09:14:59.311292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.860 [2024-11-20 09:14:59.311298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.860 [2024-11-20 09:14:59.311312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.860 qpair failed and we were unable to recover it. 
00:29:33.860 [2024-11-20 09:14:59.321263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.860 [2024-11-20 09:14:59.321330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.860 [2024-11-20 09:14:59.321343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.860 [2024-11-20 09:14:59.321350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.860 [2024-11-20 09:14:59.321356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.860 [2024-11-20 09:14:59.321371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.860 qpair failed and we were unable to recover it. 
00:29:33.860 [2024-11-20 09:14:59.331266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.860 [2024-11-20 09:14:59.331310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.860 [2024-11-20 09:14:59.331323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.860 [2024-11-20 09:14:59.331330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.860 [2024-11-20 09:14:59.331337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.860 [2024-11-20 09:14:59.331350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.860 qpair failed and we were unable to recover it. 
00:29:33.860 [2024-11-20 09:14:59.341249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.860 [2024-11-20 09:14:59.341316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.860 [2024-11-20 09:14:59.341329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.860 [2024-11-20 09:14:59.341336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.860 [2024-11-20 09:14:59.341342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.860 [2024-11-20 09:14:59.341356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.860 qpair failed and we were unable to recover it. 
00:29:33.860 [2024-11-20 09:14:59.351334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.860 [2024-11-20 09:14:59.351378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.860 [2024-11-20 09:14:59.351391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.860 [2024-11-20 09:14:59.351402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.860 [2024-11-20 09:14:59.351408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:33.860 [2024-11-20 09:14:59.351422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.860 qpair failed and we were unable to recover it. 
00:29:33.860 [2024-11-20 09:14:59.361398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.861 [2024-11-20 09:14:59.361444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.861 [2024-11-20 09:14:59.361458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.861 [2024-11-20 09:14:59.361465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.861 [2024-11-20 09:14:59.361471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.861 [2024-11-20 09:14:59.361484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.861 qpair failed and we were unable to recover it.
00:29:33.861 [2024-11-20 09:14:59.371407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.861 [2024-11-20 09:14:59.371456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.861 [2024-11-20 09:14:59.371469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.861 [2024-11-20 09:14:59.371476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.861 [2024-11-20 09:14:59.371482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:33.861 [2024-11-20 09:14:59.371496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.861 qpair failed and we were unable to recover it.
00:29:34.124 [2024-11-20 09:14:59.381429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.124 [2024-11-20 09:14:59.381477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.124 [2024-11-20 09:14:59.381490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.124 [2024-11-20 09:14:59.381497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.124 [2024-11-20 09:14:59.381504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.124 [2024-11-20 09:14:59.381517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.124 qpair failed and we were unable to recover it.
00:29:34.124 [2024-11-20 09:14:59.391535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.124 [2024-11-20 09:14:59.391587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.124 [2024-11-20 09:14:59.391600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.124 [2024-11-20 09:14:59.391608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.124 [2024-11-20 09:14:59.391614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.124 [2024-11-20 09:14:59.391631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.124 qpair failed and we were unable to recover it.
00:29:34.124 [2024-11-20 09:14:59.401536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.124 [2024-11-20 09:14:59.401586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.124 [2024-11-20 09:14:59.401600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.124 [2024-11-20 09:14:59.401607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.124 [2024-11-20 09:14:59.401613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.124 [2024-11-20 09:14:59.401629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.124 qpair failed and we were unable to recover it.
00:29:34.124 [2024-11-20 09:14:59.411492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.124 [2024-11-20 09:14:59.411552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.124 [2024-11-20 09:14:59.411565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.124 [2024-11-20 09:14:59.411572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.124 [2024-11-20 09:14:59.411578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.124 [2024-11-20 09:14:59.411591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.124 qpair failed and we were unable to recover it.
00:29:34.124 [2024-11-20 09:14:59.421535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.124 [2024-11-20 09:14:59.421579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.124 [2024-11-20 09:14:59.421592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.124 [2024-11-20 09:14:59.421599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.124 [2024-11-20 09:14:59.421605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.124 [2024-11-20 09:14:59.421618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.124 qpair failed and we were unable to recover it.
00:29:34.124 [2024-11-20 09:14:59.431514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.124 [2024-11-20 09:14:59.431558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.124 [2024-11-20 09:14:59.431571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.124 [2024-11-20 09:14:59.431579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.124 [2024-11-20 09:14:59.431585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.124 [2024-11-20 09:14:59.431599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.124 qpair failed and we were unable to recover it.
00:29:34.124 [2024-11-20 09:14:59.441623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.124 [2024-11-20 09:14:59.441675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.124 [2024-11-20 09:14:59.441688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.124 [2024-11-20 09:14:59.441695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.124 [2024-11-20 09:14:59.441702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.124 [2024-11-20 09:14:59.441715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.124 qpair failed and we were unable to recover it.
00:29:34.124 [2024-11-20 09:14:59.451590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.124 [2024-11-20 09:14:59.451638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.124 [2024-11-20 09:14:59.451652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.124 [2024-11-20 09:14:59.451659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.124 [2024-11-20 09:14:59.451665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.124 [2024-11-20 09:14:59.451678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.124 qpair failed and we were unable to recover it.
00:29:34.124 [2024-11-20 09:14:59.461523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.124 [2024-11-20 09:14:59.461615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.124 [2024-11-20 09:14:59.461630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.124 [2024-11-20 09:14:59.461638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.124 [2024-11-20 09:14:59.461644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.124 [2024-11-20 09:14:59.461662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.124 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.471536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.471597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.471612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.471619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.471625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.471638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.481725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.481771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.481784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.481795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.481801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.481816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.491721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.491769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.491782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.491789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.491795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.491809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.501767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.501814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.501828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.501835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.501841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.501855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.511744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.511787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.511801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.511808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.511814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.511828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.521800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.521878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.521892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.521899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.521905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.521923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.531860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.531911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.531925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.531932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.531938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.531952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.541829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.541874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.541888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.541895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.541901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.541914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.551884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.551928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.551942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.551949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.551955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.551968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.561935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.561985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.561999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.562006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.562012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.562026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.571927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.571979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.571992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.571999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.572006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.572019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.581964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.582012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.582026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.582033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.582039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.582052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.591946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.125 [2024-11-20 09:14:59.591996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.125 [2024-11-20 09:14:59.592009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.125 [2024-11-20 09:14:59.592016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.125 [2024-11-20 09:14:59.592022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.125 [2024-11-20 09:14:59.592036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.125 qpair failed and we were unable to recover it.
00:29:34.125 [2024-11-20 09:14:59.602047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.126 [2024-11-20 09:14:59.602089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.126 [2024-11-20 09:14:59.602103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.126 [2024-11-20 09:14:59.602110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.126 [2024-11-20 09:14:59.602116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.126 [2024-11-20 09:14:59.602129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.126 qpair failed and we were unable to recover it.
00:29:34.126 [2024-11-20 09:14:59.612027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.126 [2024-11-20 09:14:59.612078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.126 [2024-11-20 09:14:59.612091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.126 [2024-11-20 09:14:59.612105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.126 [2024-11-20 09:14:59.612112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.126 [2024-11-20 09:14:59.612126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.126 qpair failed and we were unable to recover it.
00:29:34.126 [2024-11-20 09:14:59.622065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.126 [2024-11-20 09:14:59.622111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.126 [2024-11-20 09:14:59.622125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.126 [2024-11-20 09:14:59.622132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.126 [2024-11-20 09:14:59.622139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.126 [2024-11-20 09:14:59.622153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.126 qpair failed and we were unable to recover it.
00:29:34.126 [2024-11-20 09:14:59.632101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.126 [2024-11-20 09:14:59.632180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.126 [2024-11-20 09:14:59.632196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.126 [2024-11-20 09:14:59.632204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.126 [2024-11-20 09:14:59.632211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.126 [2024-11-20 09:14:59.632226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.126 qpair failed and we were unable to recover it.
00:29:34.126 [2024-11-20 09:14:59.642125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.126 [2024-11-20 09:14:59.642177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.126 [2024-11-20 09:14:59.642191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.126 [2024-11-20 09:14:59.642198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.126 [2024-11-20 09:14:59.642204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.126 [2024-11-20 09:14:59.642218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.126 qpair failed and we were unable to recover it.
00:29:34.388 [2024-11-20 09:14:59.652111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.388 [2024-11-20 09:14:59.652165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.388 [2024-11-20 09:14:59.652178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.388 [2024-11-20 09:14:59.652186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.388 [2024-11-20 09:14:59.652192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.388 [2024-11-20 09:14:59.652206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.388 qpair failed and we were unable to recover it.
00:29:34.388 [2024-11-20 09:14:59.662167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.388 [2024-11-20 09:14:59.662215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.388 [2024-11-20 09:14:59.662229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.388 [2024-11-20 09:14:59.662236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.388 [2024-11-20 09:14:59.662242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.388 [2024-11-20 09:14:59.662256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.388 qpair failed and we were unable to recover it.
00:29:34.388 [2024-11-20 09:14:59.672097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.388 [2024-11-20 09:14:59.672146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.388 [2024-11-20 09:14:59.672163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.388 [2024-11-20 09:14:59.672171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.388 [2024-11-20 09:14:59.672177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.388 [2024-11-20 09:14:59.672191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.388 qpair failed and we were unable to recover it.
00:29:34.388 [2024-11-20 09:14:59.682247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.388 [2024-11-20 09:14:59.682295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.388 [2024-11-20 09:14:59.682310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.388 [2024-11-20 09:14:59.682317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.388 [2024-11-20 09:14:59.682323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.388 [2024-11-20 09:14:59.682338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.388 qpair failed and we were unable to recover it.
00:29:34.388 [2024-11-20 09:14:59.692239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.388 [2024-11-20 09:14:59.692285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.388 [2024-11-20 09:14:59.692299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.388 [2024-11-20 09:14:59.692306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.388 [2024-11-20 09:14:59.692312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.388 [2024-11-20 09:14:59.692326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.388 qpair failed and we were unable to recover it.
00:29:34.388 [2024-11-20 09:14:59.702250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.388 [2024-11-20 09:14:59.702300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.388 [2024-11-20 09:14:59.702314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.388 [2024-11-20 09:14:59.702321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.388 [2024-11-20 09:14:59.702327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0
00:29:34.388 [2024-11-20 09:14:59.702341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.388 qpair failed and we were unable to recover it.
00:29:34.388 [2024-11-20 09:14:59.712265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.388 [2024-11-20 09:14:59.712315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.388 [2024-11-20 09:14:59.712329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.388 [2024-11-20 09:14:59.712335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.388 [2024-11-20 09:14:59.712342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.388 [2024-11-20 09:14:59.712356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.388 qpair failed and we were unable to recover it. 
00:29:34.388 [2024-11-20 09:14:59.722359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.388 [2024-11-20 09:14:59.722403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.388 [2024-11-20 09:14:59.722417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.388 [2024-11-20 09:14:59.722424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.388 [2024-11-20 09:14:59.722430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.388 [2024-11-20 09:14:59.722444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.388 qpair failed and we were unable to recover it. 
00:29:34.388 [2024-11-20 09:14:59.732416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.388 [2024-11-20 09:14:59.732480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.388 [2024-11-20 09:14:59.732493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.388 [2024-11-20 09:14:59.732500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.388 [2024-11-20 09:14:59.732506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.388 [2024-11-20 09:14:59.732519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.388 qpair failed and we were unable to recover it. 
00:29:34.388 [2024-11-20 09:14:59.742265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.388 [2024-11-20 09:14:59.742314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.388 [2024-11-20 09:14:59.742327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.388 [2024-11-20 09:14:59.742338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.388 [2024-11-20 09:14:59.742345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.388 [2024-11-20 09:14:59.742359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.388 qpair failed and we were unable to recover it. 
00:29:34.388 [2024-11-20 09:14:59.752321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.388 [2024-11-20 09:14:59.752370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.388 [2024-11-20 09:14:59.752383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.388 [2024-11-20 09:14:59.752390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.388 [2024-11-20 09:14:59.752396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.388 [2024-11-20 09:14:59.752410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.388 qpair failed and we were unable to recover it. 
00:29:34.388 [2024-11-20 09:14:59.762454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.388 [2024-11-20 09:14:59.762505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.388 [2024-11-20 09:14:59.762518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.388 [2024-11-20 09:14:59.762525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.388 [2024-11-20 09:14:59.762531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.388 [2024-11-20 09:14:59.762545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.388 qpair failed and we were unable to recover it. 
00:29:34.388 [2024-11-20 09:14:59.772473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.388 [2024-11-20 09:14:59.772521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.388 [2024-11-20 09:14:59.772535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.388 [2024-11-20 09:14:59.772542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.388 [2024-11-20 09:14:59.772548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.388 [2024-11-20 09:14:59.772561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.388 qpair failed and we were unable to recover it. 
00:29:34.388 [2024-11-20 09:14:59.782485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.388 [2024-11-20 09:14:59.782529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.388 [2024-11-20 09:14:59.782542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.388 [2024-11-20 09:14:59.782549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.388 [2024-11-20 09:14:59.782555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.388 [2024-11-20 09:14:59.782569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.388 qpair failed and we were unable to recover it. 
00:29:34.388 [2024-11-20 09:14:59.792554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.388 [2024-11-20 09:14:59.792618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.388 [2024-11-20 09:14:59.792632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.388 [2024-11-20 09:14:59.792639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.388 [2024-11-20 09:14:59.792645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.388 [2024-11-20 09:14:59.792661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.388 qpair failed and we were unable to recover it. 
00:29:34.388 [2024-11-20 09:14:59.802573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.388 [2024-11-20 09:14:59.802618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.388 [2024-11-20 09:14:59.802632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.388 [2024-11-20 09:14:59.802639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.388 [2024-11-20 09:14:59.802646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.388 [2024-11-20 09:14:59.802660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.388 qpair failed and we were unable to recover it. 
00:29:34.388 [2024-11-20 09:14:59.812558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.388 [2024-11-20 09:14:59.812609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.389 [2024-11-20 09:14:59.812622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.389 [2024-11-20 09:14:59.812629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.389 [2024-11-20 09:14:59.812636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.389 [2024-11-20 09:14:59.812649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.389 qpair failed and we were unable to recover it. 
00:29:34.389 [2024-11-20 09:14:59.822593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.389 [2024-11-20 09:14:59.822639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.389 [2024-11-20 09:14:59.822653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.389 [2024-11-20 09:14:59.822660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.389 [2024-11-20 09:14:59.822666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.389 [2024-11-20 09:14:59.822680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.389 qpair failed and we were unable to recover it. 
00:29:34.389 [2024-11-20 09:14:59.832620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.389 [2024-11-20 09:14:59.832665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.389 [2024-11-20 09:14:59.832679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.389 [2024-11-20 09:14:59.832686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.389 [2024-11-20 09:14:59.832692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.389 [2024-11-20 09:14:59.832706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.389 qpair failed and we were unable to recover it. 
00:29:34.389 [2024-11-20 09:14:59.842702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.389 [2024-11-20 09:14:59.842784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.389 [2024-11-20 09:14:59.842798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.389 [2024-11-20 09:14:59.842805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.389 [2024-11-20 09:14:59.842811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.389 [2024-11-20 09:14:59.842825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.389 qpair failed and we were unable to recover it. 
00:29:34.389 [2024-11-20 09:14:59.852679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.389 [2024-11-20 09:14:59.852727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.389 [2024-11-20 09:14:59.852741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.389 [2024-11-20 09:14:59.852748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.389 [2024-11-20 09:14:59.852754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.389 [2024-11-20 09:14:59.852768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.389 qpair failed and we were unable to recover it. 
00:29:34.389 [2024-11-20 09:14:59.862583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.389 [2024-11-20 09:14:59.862629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.389 [2024-11-20 09:14:59.862643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.389 [2024-11-20 09:14:59.862650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.389 [2024-11-20 09:14:59.862656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.389 [2024-11-20 09:14:59.862670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.389 qpair failed and we were unable to recover it. 
00:29:34.389 [2024-11-20 09:14:59.872693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.389 [2024-11-20 09:14:59.872733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.389 [2024-11-20 09:14:59.872747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.389 [2024-11-20 09:14:59.872758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.389 [2024-11-20 09:14:59.872764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.389 [2024-11-20 09:14:59.872778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.389 qpair failed and we were unable to recover it. 
00:29:34.389 [2024-11-20 09:14:59.882701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.389 [2024-11-20 09:14:59.882748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.389 [2024-11-20 09:14:59.882762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.389 [2024-11-20 09:14:59.882769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.389 [2024-11-20 09:14:59.882775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.389 [2024-11-20 09:14:59.882789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.389 qpair failed and we were unable to recover it. 
00:29:34.389 [2024-11-20 09:14:59.892793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.389 [2024-11-20 09:14:59.892840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.389 [2024-11-20 09:14:59.892852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.389 [2024-11-20 09:14:59.892860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.389 [2024-11-20 09:14:59.892866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.389 [2024-11-20 09:14:59.892880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.389 qpair failed and we were unable to recover it. 
00:29:34.389 [2024-11-20 09:14:59.902846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.389 [2024-11-20 09:14:59.902895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.389 [2024-11-20 09:14:59.902912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.389 [2024-11-20 09:14:59.902920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.389 [2024-11-20 09:14:59.902926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.389 [2024-11-20 09:14:59.902941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.389 qpair failed and we were unable to recover it. 
00:29:34.650 [2024-11-20 09:14:59.912830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.650 [2024-11-20 09:14:59.912877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.650 [2024-11-20 09:14:59.912890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.650 [2024-11-20 09:14:59.912898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.650 [2024-11-20 09:14:59.912905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.650 [2024-11-20 09:14:59.912920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.650 qpair failed and we were unable to recover it. 
00:29:34.650 [2024-11-20 09:14:59.922903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.650 [2024-11-20 09:14:59.922951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.650 [2024-11-20 09:14:59.922964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.650 [2024-11-20 09:14:59.922971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.650 [2024-11-20 09:14:59.922977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.650 [2024-11-20 09:14:59.922991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.650 qpair failed and we were unable to recover it. 
00:29:34.650 [2024-11-20 09:14:59.932852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.650 [2024-11-20 09:14:59.932898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.650 [2024-11-20 09:14:59.932911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.650 [2024-11-20 09:14:59.932918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.650 [2024-11-20 09:14:59.932925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.650 [2024-11-20 09:14:59.932938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.650 qpair failed and we were unable to recover it. 
00:29:34.650 [2024-11-20 09:14:59.942834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.650 [2024-11-20 09:14:59.942880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.650 [2024-11-20 09:14:59.942893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.650 [2024-11-20 09:14:59.942900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.650 [2024-11-20 09:14:59.942906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.650 [2024-11-20 09:14:59.942920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.650 qpair failed and we were unable to recover it. 
00:29:34.650 [2024-11-20 09:14:59.952938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.650 [2024-11-20 09:14:59.952982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.650 [2024-11-20 09:14:59.952995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.650 [2024-11-20 09:14:59.953002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.650 [2024-11-20 09:14:59.953009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.650 [2024-11-20 09:14:59.953022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.650 qpair failed and we were unable to recover it. 
00:29:34.650 [2024-11-20 09:14:59.963036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.650 [2024-11-20 09:14:59.963122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.650 [2024-11-20 09:14:59.963135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.650 [2024-11-20 09:14:59.963142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.650 [2024-11-20 09:14:59.963149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:14:59.963166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:14:59.972980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:14:59.973027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:14:59.973040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:14:59.973047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:14:59.973054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:14:59.973067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:14:59.983040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:14:59.983086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:14:59.983100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:14:59.983107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:14:59.983114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:14:59.983127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:14:59.993054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:14:59.993111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:14:59.993126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:14:59.993133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:14:59.993139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:14:59.993153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:15:00.003108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:15:00.003164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:15:00.003180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:15:00.003191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:15:00.003197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:15:00.003212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:15:00.013078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:15:00.013173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:15:00.013187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:15:00.013194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:15:00.013201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:15:00.013215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:15:00.023024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:15:00.023075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:15:00.023089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:15:00.023096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:15:00.023102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:15:00.023117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:15:00.033173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:15:00.033264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:15:00.033277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:15:00.033285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:15:00.033291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:15:00.033305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:15:00.043243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:15:00.043319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:15:00.043333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:15:00.043340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:15:00.043346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:15:00.043360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:15:00.053238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:15:00.053283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:15:00.053297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:15:00.053304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:15:00.053311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:15:00.053324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:15:00.063264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:15:00.063311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:15:00.063326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:15:00.063333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:15:00.063340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:15:00.063356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:15:00.073267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:15:00.073315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:15:00.073328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:15:00.073335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:15:00.073342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.651 [2024-11-20 09:15:00.073355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.651 qpair failed and we were unable to recover it. 
00:29:34.651 [2024-11-20 09:15:00.083360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.651 [2024-11-20 09:15:00.083413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.651 [2024-11-20 09:15:00.083426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.651 [2024-11-20 09:15:00.083433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.651 [2024-11-20 09:15:00.083439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.652 [2024-11-20 09:15:00.083453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.652 qpair failed and we were unable to recover it. 
00:29:34.652 [2024-11-20 09:15:00.093290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.652 [2024-11-20 09:15:00.093339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.652 [2024-11-20 09:15:00.093353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.652 [2024-11-20 09:15:00.093360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.652 [2024-11-20 09:15:00.093367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.652 [2024-11-20 09:15:00.093381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.652 qpair failed and we were unable to recover it. 
00:29:34.652 [2024-11-20 09:15:00.103373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.652 [2024-11-20 09:15:00.103423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.652 [2024-11-20 09:15:00.103437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.652 [2024-11-20 09:15:00.103444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.652 [2024-11-20 09:15:00.103450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.652 [2024-11-20 09:15:00.103464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.652 qpair failed and we were unable to recover it. 
00:29:34.652 [2024-11-20 09:15:00.113379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.652 [2024-11-20 09:15:00.113427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.652 [2024-11-20 09:15:00.113440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.652 [2024-11-20 09:15:00.113447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.652 [2024-11-20 09:15:00.113453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.652 [2024-11-20 09:15:00.113467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.652 qpair failed and we were unable to recover it. 
00:29:34.652 [2024-11-20 09:15:00.123420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.652 [2024-11-20 09:15:00.123471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.652 [2024-11-20 09:15:00.123484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.652 [2024-11-20 09:15:00.123491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.652 [2024-11-20 09:15:00.123498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.652 [2024-11-20 09:15:00.123511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.652 qpair failed and we were unable to recover it. 
00:29:34.652 [2024-11-20 09:15:00.133473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.652 [2024-11-20 09:15:00.133533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.652 [2024-11-20 09:15:00.133546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.652 [2024-11-20 09:15:00.133557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.652 [2024-11-20 09:15:00.133563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.652 [2024-11-20 09:15:00.133577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.652 qpair failed and we were unable to recover it. 
00:29:34.652 [2024-11-20 09:15:00.143491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.652 [2024-11-20 09:15:00.143535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.652 [2024-11-20 09:15:00.143548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.652 [2024-11-20 09:15:00.143555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.652 [2024-11-20 09:15:00.143562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.652 [2024-11-20 09:15:00.143575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.652 qpair failed and we were unable to recover it. 
00:29:34.652 [2024-11-20 09:15:00.153511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.652 [2024-11-20 09:15:00.153556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.652 [2024-11-20 09:15:00.153569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.652 [2024-11-20 09:15:00.153577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.652 [2024-11-20 09:15:00.153583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.652 [2024-11-20 09:15:00.153597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.652 qpair failed and we were unable to recover it. 
00:29:34.652 [2024-11-20 09:15:00.163543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.652 [2024-11-20 09:15:00.163594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.652 [2024-11-20 09:15:00.163608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.652 [2024-11-20 09:15:00.163615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.652 [2024-11-20 09:15:00.163621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.652 [2024-11-20 09:15:00.163635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.652 qpair failed and we were unable to recover it. 
00:29:34.652 [2024-11-20 09:15:00.173410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.652 [2024-11-20 09:15:00.173456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.652 [2024-11-20 09:15:00.173469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.652 [2024-11-20 09:15:00.173476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.652 [2024-11-20 09:15:00.173483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.652 [2024-11-20 09:15:00.173497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.652 qpair failed and we were unable to recover it. 
00:29:34.913 [2024-11-20 09:15:00.183583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.913 [2024-11-20 09:15:00.183631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.913 [2024-11-20 09:15:00.183645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.913 [2024-11-20 09:15:00.183652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.913 [2024-11-20 09:15:00.183658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.913 [2024-11-20 09:15:00.183672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.913 qpair failed and we were unable to recover it. 
00:29:34.913 [2024-11-20 09:15:00.193485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.913 [2024-11-20 09:15:00.193532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.913 [2024-11-20 09:15:00.193545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.913 [2024-11-20 09:15:00.193552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.913 [2024-11-20 09:15:00.193558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.913 [2024-11-20 09:15:00.193572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.913 qpair failed and we were unable to recover it. 
00:29:34.913 [2024-11-20 09:15:00.203655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.913 [2024-11-20 09:15:00.203701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.203715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.203722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.203728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.203741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.213700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.213782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.213796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.213803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.213810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.213823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.223675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.223723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.223737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.223744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.223751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.223764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.233700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.233748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.233761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.233768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.233775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.233788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.243757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.243803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.243817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.243824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.243830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.243844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.253726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.253774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.253788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.253795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.253801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.253815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.263787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.263831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.263844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.263854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.263861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.263874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.273836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.273924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.273950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.273958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.273965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.273985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.283836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.283886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.283912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.283920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.283927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.283947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.293864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.293916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.293931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.293938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.293945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.293960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.303819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.303870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.303884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.303891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.303897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.303911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.313905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.313959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.313984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.313993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.314000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.314019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.324012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.914 [2024-11-20 09:15:00.324084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.914 [2024-11-20 09:15:00.324100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.914 [2024-11-20 09:15:00.324107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.914 [2024-11-20 09:15:00.324114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.914 [2024-11-20 09:15:00.324129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.914 qpair failed and we were unable to recover it. 
00:29:34.914 [2024-11-20 09:15:00.333987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.915 [2024-11-20 09:15:00.334037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.915 [2024-11-20 09:15:00.334050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.915 [2024-11-20 09:15:00.334057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.915 [2024-11-20 09:15:00.334064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.915 [2024-11-20 09:15:00.334078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.915 qpair failed and we were unable to recover it. 
00:29:34.915 [2024-11-20 09:15:00.343972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.915 [2024-11-20 09:15:00.344066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.915 [2024-11-20 09:15:00.344079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.915 [2024-11-20 09:15:00.344087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.915 [2024-11-20 09:15:00.344093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.915 [2024-11-20 09:15:00.344107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.915 qpair failed and we were unable to recover it. 
00:29:34.915 [2024-11-20 09:15:00.354019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.915 [2024-11-20 09:15:00.354066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.915 [2024-11-20 09:15:00.354080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.915 [2024-11-20 09:15:00.354087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.915 [2024-11-20 09:15:00.354094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.915 [2024-11-20 09:15:00.354107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.915 qpair failed and we were unable to recover it. 
00:29:34.915 [2024-11-20 09:15:00.364082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.915 [2024-11-20 09:15:00.364136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.915 [2024-11-20 09:15:00.364150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.915 [2024-11-20 09:15:00.364161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.915 [2024-11-20 09:15:00.364168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.915 [2024-11-20 09:15:00.364182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.915 qpair failed and we were unable to recover it. 
00:29:34.915 [2024-11-20 09:15:00.374041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.915 [2024-11-20 09:15:00.374089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.915 [2024-11-20 09:15:00.374103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.915 [2024-11-20 09:15:00.374109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.915 [2024-11-20 09:15:00.374116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.915 [2024-11-20 09:15:00.374130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.915 qpair failed and we were unable to recover it. 
00:29:34.915 [2024-11-20 09:15:00.384165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.915 [2024-11-20 09:15:00.384213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.915 [2024-11-20 09:15:00.384227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.915 [2024-11-20 09:15:00.384234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.915 [2024-11-20 09:15:00.384240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.915 [2024-11-20 09:15:00.384254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.915 qpair failed and we were unable to recover it. 
00:29:34.915 [2024-11-20 09:15:00.394129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.915 [2024-11-20 09:15:00.394178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.915 [2024-11-20 09:15:00.394192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.915 [2024-11-20 09:15:00.394203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.915 [2024-11-20 09:15:00.394210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.915 [2024-11-20 09:15:00.394223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.915 qpair failed and we were unable to recover it. 
00:29:34.915 [2024-11-20 09:15:00.404191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.915 [2024-11-20 09:15:00.404259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.915 [2024-11-20 09:15:00.404272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.915 [2024-11-20 09:15:00.404279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.915 [2024-11-20 09:15:00.404286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.915 [2024-11-20 09:15:00.404300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.915 qpair failed and we were unable to recover it. 
00:29:34.915 [2024-11-20 09:15:00.414147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.915 [2024-11-20 09:15:00.414201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.915 [2024-11-20 09:15:00.414214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.915 [2024-11-20 09:15:00.414221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.915 [2024-11-20 09:15:00.414227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.915 [2024-11-20 09:15:00.414241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.915 qpair failed and we were unable to recover it. 
00:29:34.915 [2024-11-20 09:15:00.424179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.915 [2024-11-20 09:15:00.424227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.915 [2024-11-20 09:15:00.424240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.915 [2024-11-20 09:15:00.424247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.915 [2024-11-20 09:15:00.424253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.915 [2024-11-20 09:15:00.424267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.915 qpair failed and we were unable to recover it. 
00:29:34.915 [2024-11-20 09:15:00.434209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.915 [2024-11-20 09:15:00.434255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.915 [2024-11-20 09:15:00.434268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.915 [2024-11-20 09:15:00.434276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.915 [2024-11-20 09:15:00.434282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:34.915 [2024-11-20 09:15:00.434296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.915 qpair failed and we were unable to recover it. 
00:29:35.177 [2024-11-20 09:15:00.444281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.177 [2024-11-20 09:15:00.444325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.177 [2024-11-20 09:15:00.444338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.177 [2024-11-20 09:15:00.444345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.177 [2024-11-20 09:15:00.444351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.177 [2024-11-20 09:15:00.444365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.177 qpair failed and we were unable to recover it. 
00:29:35.177 [2024-11-20 09:15:00.454252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.177 [2024-11-20 09:15:00.454299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.177 [2024-11-20 09:15:00.454312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.177 [2024-11-20 09:15:00.454319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.177 [2024-11-20 09:15:00.454326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.177 [2024-11-20 09:15:00.454340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.177 qpair failed and we were unable to recover it. 
00:29:35.177 [2024-11-20 09:15:00.464331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.177 [2024-11-20 09:15:00.464425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.177 [2024-11-20 09:15:00.464439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.177 [2024-11-20 09:15:00.464446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.177 [2024-11-20 09:15:00.464452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.177 [2024-11-20 09:15:00.464466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.177 qpair failed and we were unable to recover it. 
00:29:35.177 [2024-11-20 09:15:00.474373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.177 [2024-11-20 09:15:00.474437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.177 [2024-11-20 09:15:00.474451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.177 [2024-11-20 09:15:00.474457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.177 [2024-11-20 09:15:00.474464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.177 [2024-11-20 09:15:00.474478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.177 qpair failed and we were unable to recover it. 
00:29:35.177 [2024-11-20 09:15:00.484444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.177 [2024-11-20 09:15:00.484500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.177 [2024-11-20 09:15:00.484513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.177 [2024-11-20 09:15:00.484521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.177 [2024-11-20 09:15:00.484527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.177 [2024-11-20 09:15:00.484541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.177 qpair failed and we were unable to recover it. 
00:29:35.177 [2024-11-20 09:15:00.494450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.177 [2024-11-20 09:15:00.494495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.177 [2024-11-20 09:15:00.494509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.177 [2024-11-20 09:15:00.494516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.177 [2024-11-20 09:15:00.494523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.177 [2024-11-20 09:15:00.494536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.177 qpair failed and we were unable to recover it. 
00:29:35.177 [2024-11-20 09:15:00.504443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.177 [2024-11-20 09:15:00.504491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.177 [2024-11-20 09:15:00.504504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.177 [2024-11-20 09:15:00.504511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.177 [2024-11-20 09:15:00.504518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.177 [2024-11-20 09:15:00.504531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.177 qpair failed and we were unable to recover it. 
00:29:35.177 [2024-11-20 09:15:00.514424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.177 [2024-11-20 09:15:00.514471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.177 [2024-11-20 09:15:00.514484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.514491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.514497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.514510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.524506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.524550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.524563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.524574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.524580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.524594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.534503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.534550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.534564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.534571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.534577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.534591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.544404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.544454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.544468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.544475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.544482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.544496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.554551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.554601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.554614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.554621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.554628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.554642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.564650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.564698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.564711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.564718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.564724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.564738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.574606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.574653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.574666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.574673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.574679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.574692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.584637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.584682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.584695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.584702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.584708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.584722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.594614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.594667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.594680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.594687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.594693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.594706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.604725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.604772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.604786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.604793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.604799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.604812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.614727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.614775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.614791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.614799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.614805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.614819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.624763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.624809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.624824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.624831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.624838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.624852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.634743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.634785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.634800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.178 [2024-11-20 09:15:00.634807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.178 [2024-11-20 09:15:00.634814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.178 [2024-11-20 09:15:00.634828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.178 qpair failed and we were unable to recover it. 
00:29:35.178 [2024-11-20 09:15:00.644876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.178 [2024-11-20 09:15:00.644942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.178 [2024-11-20 09:15:00.644955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.179 [2024-11-20 09:15:00.644962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.179 [2024-11-20 09:15:00.644969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.179 [2024-11-20 09:15:00.644983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.179 qpair failed and we were unable to recover it. 
00:29:35.179 [2024-11-20 09:15:00.654817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.179 [2024-11-20 09:15:00.654898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.179 [2024-11-20 09:15:00.654912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.179 [2024-11-20 09:15:00.654922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.179 [2024-11-20 09:15:00.654928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.179 [2024-11-20 09:15:00.654942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.179 qpair failed and we were unable to recover it. 
00:29:35.179 [2024-11-20 09:15:00.664866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.179 [2024-11-20 09:15:00.664931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.179 [2024-11-20 09:15:00.664945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.179 [2024-11-20 09:15:00.664952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.179 [2024-11-20 09:15:00.664958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.179 [2024-11-20 09:15:00.664972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.179 qpair failed and we were unable to recover it. 
00:29:35.179 [2024-11-20 09:15:00.674866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.179 [2024-11-20 09:15:00.674914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.179 [2024-11-20 09:15:00.674927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.179 [2024-11-20 09:15:00.674935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.179 [2024-11-20 09:15:00.674941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.179 [2024-11-20 09:15:00.674955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.179 qpair failed and we were unable to recover it. 
00:29:35.179 [2024-11-20 09:15:00.684934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.179 [2024-11-20 09:15:00.684980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.179 [2024-11-20 09:15:00.684993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.179 [2024-11-20 09:15:00.685001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.179 [2024-11-20 09:15:00.685007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.179 [2024-11-20 09:15:00.685020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.179 qpair failed and we were unable to recover it. 
00:29:35.179 [2024-11-20 09:15:00.694894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.179 [2024-11-20 09:15:00.694968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.179 [2024-11-20 09:15:00.694981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.179 [2024-11-20 09:15:00.694988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.179 [2024-11-20 09:15:00.694995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.179 [2024-11-20 09:15:00.695008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.179 qpair failed and we were unable to recover it. 
00:29:35.440 [2024-11-20 09:15:00.704989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.440 [2024-11-20 09:15:00.705038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.440 [2024-11-20 09:15:00.705053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.440 [2024-11-20 09:15:00.705061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.440 [2024-11-20 09:15:00.705068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.440 [2024-11-20 09:15:00.705082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.440 qpair failed and we were unable to recover it. 
00:29:35.440 [2024-11-20 09:15:00.715030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.440 [2024-11-20 09:15:00.715076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.440 [2024-11-20 09:15:00.715089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.440 [2024-11-20 09:15:00.715097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.440 [2024-11-20 09:15:00.715103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.440 [2024-11-20 09:15:00.715117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.440 qpair failed and we were unable to recover it. 
00:29:35.440 [2024-11-20 09:15:00.725060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.440 [2024-11-20 09:15:00.725109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.440 [2024-11-20 09:15:00.725122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.440 [2024-11-20 09:15:00.725129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.725136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.725149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.735049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.735094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.735107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.735114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.735120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.735133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.745091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.745138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.745155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.745168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.745175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.745189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.755091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.755162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.755176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.755183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.755189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.755203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.765174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.765267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.765280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.765287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.765293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.765307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.775171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.775220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.775233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.775240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.775246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.775260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.785163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.785211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.785224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.785235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.785241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.785255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.795252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.795292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.795305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.795312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.795319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.795333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.805162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.805220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.805233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.805240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.805247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.805260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.815291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.815379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.815392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.815399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.815405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.815418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.825294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.825352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.825365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.825372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.825378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.825391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.835308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.835377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.835389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.835396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.835403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.835416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.845265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.845314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.845327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.441 [2024-11-20 09:15:00.845334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.441 [2024-11-20 09:15:00.845340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.441 [2024-11-20 09:15:00.845354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.441 qpair failed and we were unable to recover it. 
00:29:35.441 [2024-11-20 09:15:00.855360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.441 [2024-11-20 09:15:00.855406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.441 [2024-11-20 09:15:00.855419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.442 [2024-11-20 09:15:00.855426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.442 [2024-11-20 09:15:00.855432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.442 [2024-11-20 09:15:00.855446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.442 qpair failed and we were unable to recover it. 
00:29:35.442 [2024-11-20 09:15:00.865394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.442 [2024-11-20 09:15:00.865442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.442 [2024-11-20 09:15:00.865454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.442 [2024-11-20 09:15:00.865461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.442 [2024-11-20 09:15:00.865467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.442 [2024-11-20 09:15:00.865481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.442 qpair failed and we were unable to recover it. 
00:29:35.442 [2024-11-20 09:15:00.875437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.442 [2024-11-20 09:15:00.875485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.442 [2024-11-20 09:15:00.875501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.442 [2024-11-20 09:15:00.875508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.442 [2024-11-20 09:15:00.875514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb660c0 00:29:35.442 [2024-11-20 09:15:00.875528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.442 qpair failed and we were unable to recover it. 
00:29:35.442 [2024-11-20 09:15:00.885435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.442 [2024-11-20 09:15:00.885530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.442 [2024-11-20 09:15:00.885593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.442 [2024-11-20 09:15:00.885619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.442 [2024-11-20 09:15:00.885640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f062c000b90 00:29:35.442 [2024-11-20 09:15:00.885696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.442 qpair failed and we were unable to recover it. 
00:29:35.442 [2024-11-20 09:15:00.895391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.442 [2024-11-20 09:15:00.895453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.442 [2024-11-20 09:15:00.895481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.442 [2024-11-20 09:15:00.895496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.442 [2024-11-20 09:15:00.895509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f062c000b90 00:29:35.442 [2024-11-20 09:15:00.895539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.442 qpair failed and we were unable to recover it. 00:29:35.442 [2024-11-20 09:15:00.895704] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:35.442 A controller has encountered a failure and is being reset. 00:29:35.442 [2024-11-20 09:15:00.895842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5be00 (9): Bad file descriptor 00:29:35.442 Controller properly reset. 
00:29:35.442 Initializing NVMe Controllers 00:29:35.442 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:35.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:35.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:35.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:35.442 Initialization complete. Launching workers. 00:29:35.442 Starting thread on core 1 00:29:35.442 Starting thread on core 2 00:29:35.442 Starting thread on core 3 00:29:35.442 Starting thread on core 0 00:29:35.442 09:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:35.442 00:29:35.442 real 0m11.397s 00:29:35.442 user 0m22.040s 00:29:35.442 sys 0m3.705s 00:29:35.442 09:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.442 09:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.442 ************************************ 00:29:35.442 END TEST nvmf_target_disconnect_tc2 00:29:35.442 ************************************ 00:29:35.702 09:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:35.702 09:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:35.702 09:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:35.702 09:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:35.702 09:15:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:35.702 09:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:35.702 09:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:35.702 09:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:35.702 09:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:35.702 rmmod nvme_tcp 00:29:35.702 rmmod nvme_fabrics 00:29:35.702 rmmod nvme_keyring 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 883543 ']' 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 883543 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 883543 ']' 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 883543 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883543 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883543' 00:29:35.702 killing process with pid 883543 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 883543 00:29:35.702 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 883543 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.962 09:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.872 09:15:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:37.872 00:29:37.872 real 0m21.846s 00:29:37.872 user 0m49.608s 00:29:37.872 sys 
0m10.016s 00:29:37.872 09:15:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.872 09:15:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:37.872 ************************************ 00:29:37.872 END TEST nvmf_target_disconnect 00:29:37.872 ************************************ 00:29:37.872 09:15:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:37.872 00:29:37.872 real 6m33.065s 00:29:37.872 user 11m21.029s 00:29:37.872 sys 2m15.744s 00:29:37.872 09:15:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.872 09:15:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.872 ************************************ 00:29:37.872 END TEST nvmf_host 00:29:37.872 ************************************ 00:29:38.133 09:15:03 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:38.133 09:15:03 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:38.133 09:15:03 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:38.133 09:15:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:38.133 09:15:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.133 09:15:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:38.133 ************************************ 00:29:38.133 START TEST nvmf_target_core_interrupt_mode 00:29:38.133 ************************************ 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:38.133 * Looking for test storage... 
00:29:38.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:38.133 09:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:38.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.133 --rc 
genhtml_branch_coverage=1 00:29:38.133 --rc genhtml_function_coverage=1 00:29:38.133 --rc genhtml_legend=1 00:29:38.133 --rc geninfo_all_blocks=1 00:29:38.133 --rc geninfo_unexecuted_blocks=1 00:29:38.133 00:29:38.133 ' 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:38.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.133 --rc genhtml_branch_coverage=1 00:29:38.133 --rc genhtml_function_coverage=1 00:29:38.133 --rc genhtml_legend=1 00:29:38.133 --rc geninfo_all_blocks=1 00:29:38.133 --rc geninfo_unexecuted_blocks=1 00:29:38.133 00:29:38.133 ' 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:38.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.133 --rc genhtml_branch_coverage=1 00:29:38.133 --rc genhtml_function_coverage=1 00:29:38.133 --rc genhtml_legend=1 00:29:38.133 --rc geninfo_all_blocks=1 00:29:38.133 --rc geninfo_unexecuted_blocks=1 00:29:38.133 00:29:38.133 ' 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:38.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.133 --rc genhtml_branch_coverage=1 00:29:38.133 --rc genhtml_function_coverage=1 00:29:38.133 --rc genhtml_legend=1 00:29:38.133 --rc geninfo_all_blocks=1 00:29:38.133 --rc geninfo_unexecuted_blocks=1 00:29:38.133 00:29:38.133 ' 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:38.133 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.394 
09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.394 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.395 09:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:38.395 
09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:38.395 ************************************ 00:29:38.395 START TEST nvmf_abort 00:29:38.395 ************************************ 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:38.395 * Looking for test storage... 
00:29:38.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:29:38.395 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:38.657 09:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.657 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:38.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.657 --rc genhtml_branch_coverage=1 00:29:38.657 --rc genhtml_function_coverage=1 00:29:38.657 --rc genhtml_legend=1 00:29:38.657 --rc geninfo_all_blocks=1 00:29:38.657 --rc geninfo_unexecuted_blocks=1 00:29:38.657 00:29:38.657 ' 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:38.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.658 --rc genhtml_branch_coverage=1 00:29:38.658 --rc genhtml_function_coverage=1 00:29:38.658 --rc genhtml_legend=1 00:29:38.658 --rc geninfo_all_blocks=1 00:29:38.658 --rc geninfo_unexecuted_blocks=1 00:29:38.658 00:29:38.658 ' 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:38.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.658 --rc genhtml_branch_coverage=1 00:29:38.658 --rc genhtml_function_coverage=1 00:29:38.658 --rc genhtml_legend=1 00:29:38.658 --rc geninfo_all_blocks=1 00:29:38.658 --rc geninfo_unexecuted_blocks=1 00:29:38.658 00:29:38.658 ' 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:38.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.658 --rc genhtml_branch_coverage=1 00:29:38.658 --rc genhtml_function_coverage=1 00:29:38.658 --rc genhtml_legend=1 00:29:38.658 --rc geninfo_all_blocks=1 00:29:38.658 --rc geninfo_unexecuted_blocks=1 00:29:38.658 00:29:38.658 ' 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.658 09:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.658 09:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.658 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:46.797 09:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:46.797 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:46.797 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.797 
09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:46.797 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:46.797 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:46.797 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.798 09:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:46.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:29:46.798 00:29:46.798 --- 10.0.0.2 ping statistics --- 00:29:46.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.798 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:46.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:29:46.798 00:29:46.798 --- 10.0.0.1 ping statistics --- 00:29:46.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.798 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=889645 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 889645 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 889645 ']' 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.798 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.798 [2024-11-20 09:15:11.603476] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:46.798 [2024-11-20 09:15:11.604991] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:29:46.798 [2024-11-20 09:15:11.605064] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.798 [2024-11-20 09:15:11.706108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:46.798 [2024-11-20 09:15:11.756949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.798 [2024-11-20 09:15:11.756996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.798 [2024-11-20 09:15:11.757005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.798 [2024-11-20 09:15:11.757013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.798 [2024-11-20 09:15:11.757019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.798 [2024-11-20 09:15:11.758823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.798 [2024-11-20 09:15:11.758985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.798 [2024-11-20 09:15:11.758987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.798 [2024-11-20 09:15:11.835324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:46.798 [2024-11-20 09:15:11.836295] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:46.798 [2024-11-20 09:15:11.836752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:46.798 [2024-11-20 09:15:11.836886] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.059 [2024-11-20 09:15:12.459866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:47.059 Malloc0 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.059 Delay0 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.059 [2024-11-20 09:15:12.571850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.059 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.319 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.319 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:47.319 [2024-11-20 09:15:12.675327] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:49.865 Initializing NVMe Controllers 00:29:49.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:49.865 controller IO queue size 128 less than required 00:29:49.865 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:49.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:49.865 Initialization complete. Launching workers. 
00:29:49.865 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28488 00:29:49.865 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28545, failed to submit 66 00:29:49.865 success 28488, unsuccessful 57, failed 0 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.865 rmmod nvme_tcp 00:29:49.865 rmmod nvme_fabrics 00:29:49.865 rmmod nvme_keyring 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.865 09:15:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 889645 ']' 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 889645 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 889645 ']' 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 889645 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 889645 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 889645' 00:29:49.865 killing process with pid 889645 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 889645 00:29:49.865 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 889645 00:29:49.865 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:49.865 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:49.865 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:49.865 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:49.865 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:49.866 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:49.866 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:49.866 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.866 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:49.866 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.866 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.866 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.781 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.781 00:29:51.781 real 0m13.470s 00:29:51.781 user 0m10.985s 00:29:51.781 sys 0m7.044s 00:29:51.781 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.781 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:51.781 ************************************ 00:29:51.781 END TEST nvmf_abort 00:29:51.781 ************************************ 00:29:51.781 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:51.781 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:51.781 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.781 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:51.781 ************************************ 00:29:51.781 START TEST nvmf_ns_hotplug_stress 00:29:51.781 ************************************ 00:29:51.781 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:52.042 * Looking for test storage... 00:29:52.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.042 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:52.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.043 --rc genhtml_branch_coverage=1 00:29:52.043 --rc genhtml_function_coverage=1 00:29:52.043 --rc genhtml_legend=1 00:29:52.043 --rc geninfo_all_blocks=1 00:29:52.043 --rc geninfo_unexecuted_blocks=1 00:29:52.043 00:29:52.043 ' 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:52.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.043 --rc genhtml_branch_coverage=1 00:29:52.043 --rc genhtml_function_coverage=1 00:29:52.043 --rc genhtml_legend=1 00:29:52.043 --rc geninfo_all_blocks=1 00:29:52.043 --rc geninfo_unexecuted_blocks=1 00:29:52.043 00:29:52.043 ' 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:52.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.043 --rc genhtml_branch_coverage=1 00:29:52.043 --rc genhtml_function_coverage=1 00:29:52.043 --rc genhtml_legend=1 00:29:52.043 --rc geninfo_all_blocks=1 00:29:52.043 --rc geninfo_unexecuted_blocks=1 00:29:52.043 00:29:52.043 ' 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:52.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.043 --rc genhtml_branch_coverage=1 00:29:52.043 --rc genhtml_function_coverage=1 00:29:52.043 --rc genhtml_legend=1 00:29:52.043 --rc geninfo_all_blocks=1 00:29:52.043 --rc geninfo_unexecuted_blocks=1 00:29:52.043 00:29:52.043 ' 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.043 09:15:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:52.043 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.043 09:15:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:00.252 09:15:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.252 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.253 
09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:00.253 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.253 09:15:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:00.253 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.253 09:15:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:00.253 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:00.253 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:00.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:30:00.253 00:30:00.253 --- 10.0.0.2 ping statistics --- 00:30:00.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.253 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:00.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:30:00.253 00:30:00.253 --- 10.0.0.1 ping statistics --- 00:30:00.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.253 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:00.253 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.254 09:15:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=894355 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 894355 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 894355 ']' 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:00.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.254 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:00.254 [2024-11-20 09:15:25.042273] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:00.254 [2024-11-20 09:15:25.043394] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:30:00.254 [2024-11-20 09:15:25.043444] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.254 [2024-11-20 09:15:25.142607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:00.254 [2024-11-20 09:15:25.193733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.254 [2024-11-20 09:15:25.193784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.254 [2024-11-20 09:15:25.193793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.254 [2024-11-20 09:15:25.193800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.254 [2024-11-20 09:15:25.193807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:00.254 [2024-11-20 09:15:25.195651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.254 [2024-11-20 09:15:25.195809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.254 [2024-11-20 09:15:25.195812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.254 [2024-11-20 09:15:25.273473] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:00.254 [2024-11-20 09:15:25.274436] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:00.254 [2024-11-20 09:15:25.275023] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:00.254 [2024-11-20 09:15:25.275156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:00.514 09:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.514 09:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:00.514 09:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:00.514 09:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.514 09:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:00.514 09:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.515 09:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:30:00.515 09:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:00.775 [2024-11-20 09:15:26.060692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.775 09:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:00.775 09:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.036 [2024-11-20 09:15:26.421387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.036 09:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:01.297 09:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:01.297 Malloc0 00:30:01.558 09:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:01.558 Delay0 00:30:01.558 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.818 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:02.080 NULL1 00:30:02.080 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:02.080 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=895034 00:30:02.080 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:02.080 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:02.080 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.341 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.600 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:02.600 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:02.860 true 00:30:02.860 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:02.861 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.120 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.120 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:03.120 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:03.381 true 00:30:03.381 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:03.381 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.642 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.907 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:03.907 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:03.907 true 00:30:04.169 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:04.169 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.169 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.428 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:04.428 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:04.687 true 00:30:04.687 09:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:04.688 09:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.947 09:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.947 09:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:04.947 09:15:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:05.207 true 00:30:05.207 09:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:05.207 09:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.466 09:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.466 09:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:05.466 09:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:05.727 true 00:30:05.727 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:05.727 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.986 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.248 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:30:06.248 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:06.248 true 00:30:06.248 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:06.248 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.509 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.769 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:06.769 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:06.769 true 00:30:06.769 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:06.769 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.029 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.289 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:30:07.289 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:07.289 true 00:30:07.550 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:07.550 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.550 09:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.810 09:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:07.810 09:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:08.071 true 00:30:08.071 09:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:08.071 09:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.071 09:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.332 09:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:08.332 09:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:08.592 true 00:30:08.592 09:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:08.592 09:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.853 09:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.853 09:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:08.853 09:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:09.115 true 00:30:09.115 09:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:09.115 09:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.376 09:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.376 09:15:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:09.376 09:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:09.638 true 00:30:09.638 09:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:09.638 09:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.898 09:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.159 09:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:10.159 09:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:10.159 true 00:30:10.159 09:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:10.159 09:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.419 09:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:30:10.679 09:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:10.679 09:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:10.679 true 00:30:10.679 09:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:10.679 09:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.938 09:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.198 09:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:11.198 09:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:11.198 true 00:30:11.458 09:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:11.458 09:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.458 09:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.717 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:11.717 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:11.977 true 00:30:11.977 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:11.977 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.977 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.237 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:12.237 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:12.497 true 00:30:12.497 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:12.497 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.497 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.756 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:12.756 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:13.016 true 00:30:13.016 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:13.016 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.276 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.276 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:13.276 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:13.536 true 00:30:13.536 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:13.536 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.797 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.798 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:13.798 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:14.057 true 00:30:14.057 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:14.057 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.316 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.576 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:14.576 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:14.576 true 00:30:14.576 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:14.576 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.835 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.094 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:15.094 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:15.094 true 00:30:15.094 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:15.094 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.354 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.615 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:15.615 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:15.615 true 00:30:15.615 09:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:15.615 09:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.875 09:15:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.135 09:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:16.135 09:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:16.396 true 00:30:16.396 09:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:16.396 09:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.396 09:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.657 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:16.657 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:16.917 true 00:30:16.917 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:16.918 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:30:16.918 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.178 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:17.178 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:17.439 true 00:30:17.439 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:17.439 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.700 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.700 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:17.700 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:17.959 true 00:30:17.959 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:17.959 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:18.219 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.219 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:18.219 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:18.479 true 00:30:18.479 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:18.479 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.738 09:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.999 09:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:18.999 09:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:18.999 true 00:30:18.999 09:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:18.999 09:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.259 09:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.519 09:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:19.519 09:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:19.519 true 00:30:19.519 09:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:19.519 09:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.779 09:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.038 09:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:20.038 09:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:20.038 true 00:30:20.298 09:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:20.298 09:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.298 09:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.557 09:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:20.557 09:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:20.817 true 00:30:20.817 09:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:20.817 09:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.817 09:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.076 09:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:21.076 09:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:21.335 true 00:30:21.335 09:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:21.335 09:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.595 09:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.595 09:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:21.595 09:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:21.855 true 00:30:21.855 09:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:21.855 09:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.116 09:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.116 09:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:22.116 09:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:22.376 true 00:30:22.376 09:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:22.376 09:15:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.636 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.897 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:22.897 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:22.897 true 00:30:22.897 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:22.897 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.157 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.417 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:23.417 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:23.417 true 00:30:23.417 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 
00:30:23.417 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.676 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.935 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:23.935 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:24.195 true 00:30:24.195 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:24.195 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.195 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.455 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:24.455 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:24.715 true 00:30:24.715 09:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 895034 00:30:24.715 09:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.715 09:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.975 09:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:24.975 09:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:25.234 true 00:30:25.234 09:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:25.234 09:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.494 09:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.494 09:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:25.494 09:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:25.754 true 00:30:25.754 09:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:25.754 09:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.015 09:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.015 09:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:26.015 09:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:26.275 true 00:30:26.275 09:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:26.275 09:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.534 09:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.794 09:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:26.794 09:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:26.794 true 00:30:26.794 09:15:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:26.794 09:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.054 09:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.314 09:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:27.314 09:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:27.314 true 00:30:27.314 09:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:27.314 09:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.573 09:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.833 09:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:27.833 09:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:27.833 true 
00:30:27.833 09:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:27.833 09:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.093 09:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.353 09:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:28.353 09:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:28.614 true 00:30:28.614 09:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:28.614 09:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.614 09:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.874 09:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:28.874 09:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:30:29.134 true 00:30:29.134 09:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:29.134 09:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.393 09:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.393 09:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:29.393 09:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:29.653 true 00:30:29.653 09:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:29.653 09:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.913 09:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.913 09:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:29.913 09:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:30:30.173 true 00:30:30.173 09:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:30.173 09:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.433 09:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.433 09:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:30.433 09:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:30.694 true 00:30:30.694 09:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:30.694 09:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.954 09:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.215 09:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:31.215 09:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:31.215 true 00:30:31.215 09:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:31.215 09:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.476 09:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.737 09:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:30:31.737 09:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:30:31.737 true 00:30:31.737 09:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:31.737 09:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.997 09:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.257 09:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:30:32.257 09:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:30:32.257 true 00:30:32.516 09:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:32.516 09:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:32.516 Initializing NVMe Controllers
00:30:32.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:32.517 Controller IO queue size 128, less than required.
00:30:32.517 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:32.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:32.517 Initialization complete. Launching workers.
00:30:32.517 ========================================================
00:30:32.517                                                                                                      Latency(us)
00:30:32.517 Device Information                                                       : IOPS      MiB/s    Average        min        max
00:30:32.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30197.46  14.74    4238.68     1116.95   11415.93
00:30:32.517 ========================================================
00:30:32.517 Total                                                                   : 30197.46  14.74    4238.68     1116.95   11415.93
00:30:32.517
00:30:32.517 09:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.776 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:30:32.776 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:30:33.035 true 00:30:33.035 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 895034 00:30:33.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (895034) - No such process 00:30:33.035 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 895034 00:30:33.035 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.035 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:33.296 09:15:58
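The trace repeated above (ns_hotplug_stress.sh lines @44-@50, then @53-@55 once the perf PID 895034 is gone) is a hotplug-stress loop: while the background workload process is still alive, remove namespace 1, re-add the Delay0 bdev as a namespace, and grow the NULL1 bdev by one block per pass. A minimal self-contained sketch of that control flow, with `rpc.py` replaced by a stub and the perf workload replaced by a short `sleep` (both are assumptions for illustration; the real script drives a live SPDK nvmf target):

```shell
#!/usr/bin/env bash
# Sketch of the loop traced at ns_hotplug_stress.sh@44-@50.
# rpc_py is a stub standing in for spdk/scripts/rpc.py; the small sleep
# mimics per-RPC latency so the loop runs a handful of iterations.
rpc_py() { sleep 0.01; }

sleep 0.3 &           # stand-in for the background perf workload
perf_pid=$!

null_size=1022
while kill -0 "$perf_pid" 2>/dev/null; do                            # @44
    rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45
    rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46
    null_size=$((null_size + 1))                                     # @49
    rpc_py bdev_null_resize NULL1 "$null_size"                       # @50
done
wait "$perf_pid"      # @53: collect the workload's exit status
echo "final null_size=$null_size"
```

Once the workload exits, bash reaps it, `kill -0` starts failing (the "No such process" line in the log), the loop ends, and the script tears down both namespaces, matching the @53-@55 records above.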
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:33.296 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:33.296 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:33.296 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:33.296 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:33.296 null0 00:30:33.556 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:33.556 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:33.556 09:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:33.556 null1 00:30:33.556 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:33.556 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:33.556 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:33.827 null2 00:30:33.827 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:33.827 09:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:33.827 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:34.087 null3 00:30:34.087 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:34.087 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:34.087 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:34.087 null4 00:30:34.087 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:34.087 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:34.087 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:34.348 null5 00:30:34.348 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:34.348 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:34.348 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:34.609 null6 00:30:34.609 09:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:34.609 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:34.609 09:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:34.609 null7 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:34.609 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:34.610 09:16:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 901225 901226 901229 901230 901232 901234 901236 901238 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 
00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.610 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:34.871 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:34.871 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.871 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:34.871 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:34.871 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:34.871 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:30:34.871 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:34.871 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.132 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:35.392 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:35.392 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:35.392 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:35.392 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:35.392 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:35.392 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.393 09:16:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.393 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:35.653 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.653 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.653 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:35.653 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.653 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.653 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:35.653 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.653 09:16:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.653 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:35.653 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.653 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.653 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:35.653 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:35.653 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:35.653 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:35.653 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:35.653 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:35.653 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:35.653 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:35.653 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.913 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:35.914 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.914 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.914 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:35.914 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.174 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:36.434 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.434 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.434 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:36.434 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:36.434 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.434 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:36.434 09:16:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:36.434 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:36.434 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:36.434 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:36.434 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:36.695 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.695 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.695 09:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.695 09:16:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:36.695 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.956 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:36.956 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:36.956 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:36.956 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:36.956 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:36.956 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.957 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.217 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.477 09:16:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:37.477 09:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.737 09:16:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:37.737 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:38.002 09:16:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.002 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:38.307 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.307 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.307 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:38.307 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:38.308 09:16:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.308 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:38.611 09:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:38.611 rmmod nvme_tcp 00:30:38.611 rmmod nvme_fabrics 00:30:38.611 rmmod nvme_keyring 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:38.611 09:16:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 894355 ']' 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 894355 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 894355 ']' 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 894355 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:38.611 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 894355 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 894355' 00:30:38.871 killing process with pid 894355 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 894355 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 894355 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.871 09:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:41.413 00:30:41.413 real 0m49.082s 00:30:41.413 user 3m2.605s 00:30:41.413 sys 0m22.556s 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:41.413 ************************************ 00:30:41.413 END TEST nvmf_ns_hotplug_stress 00:30:41.413 
************************************ 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:41.413 ************************************ 00:30:41.413 START TEST nvmf_delete_subsystem 00:30:41.413 ************************************ 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:41.413 * Looking for test storage... 
00:30:41.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:41.413 09:16:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:41.413 09:16:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.413 --rc genhtml_branch_coverage=1 00:30:41.413 --rc genhtml_function_coverage=1 00:30:41.413 --rc genhtml_legend=1 00:30:41.413 --rc geninfo_all_blocks=1 00:30:41.413 --rc geninfo_unexecuted_blocks=1 00:30:41.413 00:30:41.413 ' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.413 --rc genhtml_branch_coverage=1 00:30:41.413 --rc genhtml_function_coverage=1 00:30:41.413 --rc genhtml_legend=1 00:30:41.413 --rc geninfo_all_blocks=1 00:30:41.413 --rc geninfo_unexecuted_blocks=1 00:30:41.413 00:30:41.413 ' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.413 --rc genhtml_branch_coverage=1 00:30:41.413 --rc genhtml_function_coverage=1 00:30:41.413 --rc genhtml_legend=1 00:30:41.413 --rc geninfo_all_blocks=1 00:30:41.413 --rc geninfo_unexecuted_blocks=1 00:30:41.413 00:30:41.413 ' 00:30:41.413 09:16:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:41.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.413 --rc genhtml_branch_coverage=1 00:30:41.413 --rc genhtml_function_coverage=1 00:30:41.413 --rc genhtml_legend=1 00:30:41.413 --rc geninfo_all_blocks=1 00:30:41.413 --rc geninfo_unexecuted_blocks=1 00:30:41.413 00:30:41.413 ' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.413 09:16:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.413 
09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:41.413 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.414 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:41.414 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:41.414 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:41.414 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.414 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.414 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.414 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:41.414 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:41.414 09:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:41.414 09:16:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:49.548 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:49.549 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:30:49.549 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.549 09:16:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:49.549 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:49.549 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:49.549 09:16:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.549 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:30:49.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:30:49.549 00:30:49.549 --- 10.0.0.2 ping statistics --- 00:30:49.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.549 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:49.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:30:49.549 00:30:49.549 --- 10.0.0.1 ping statistics --- 00:30:49.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.549 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=906388 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 906388 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 906388 ']' 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.549 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
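The xtrace above shows `nvmf_tcp_init` building a point-to-point TCP topology: the target NIC (`cvl_0_0`) is moved into a dedicated network namespace (`cvl_0_0_ns_spdk`) with 10.0.0.2, the initiator NIC (`cvl_0_1`) stays in the root namespace with 10.0.0.1, port 4420 is opened in iptables, and both directions are verified with ping. A dry-run sketch of that sequence follows (interface, namespace, and address names are taken from this log; the real commands need root and physical NICs, so they are only printed here, not executed):

```shell
# Dry-run sketch of the namespace setup logged above. Names (cvl_0_0,
# cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.x/24) come from this log; adjust for
# other hardware. Commands are collected and printed, not run.
NS=cvl_0_0_ns_spdk
setup_cmds="ip netns add $NS
ip link set cvl_0_0 netns $NS
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
printf '%s\n' "$setup_cmds"
```

With the target behind its own namespace, traffic between initiator and target is forced over the real wire (or at least the real driver) instead of the loopback shortcut, which is why `nvmf_tgt` is later launched via `ip netns exec cvl_0_0_ns_spdk`.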
00:30:49.550 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.550 09:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.550 [2024-11-20 09:16:14.257022] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:49.550 [2024-11-20 09:16:14.258173] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:30:49.550 [2024-11-20 09:16:14.258227] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.550 [2024-11-20 09:16:14.360442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:49.550 [2024-11-20 09:16:14.411997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.550 [2024-11-20 09:16:14.412057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.550 [2024-11-20 09:16:14.412065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.550 [2024-11-20 09:16:14.412073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.550 [2024-11-20 09:16:14.412079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.550 [2024-11-20 09:16:14.413781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.550 [2024-11-20 09:16:14.413786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.550 [2024-11-20 09:16:14.490263] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:49.550 [2024-11-20 09:16:14.491063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:49.550 [2024-11-20 09:16:14.491278] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.811 [2024-11-20 09:16:15.162765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.811 [2024-11-20 09:16:15.195279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.811 NULL1 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.811 Delay0 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=906696 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:49.811 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:49.812 [2024-11-20 09:16:15.321665] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
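The RPC calls logged above set up the actual test case: a null bdev is wrapped in a delay bdev (`Delay0`, 1 ms latencies) and exposed through subsystem `cnode1`, so that when `spdk_nvme_perf` runs its 5-second random read/write workload there is guaranteed to be I/O in flight at the moment `nvmf_delete_subsystem` fires. A dry-run sketch of that RPC sequence (NQN, serial, port, and bdev names as logged; printed rather than executed, since the real calls need a running `nvmf_tgt` listening on `/var/tmp/spdk.sock`):

```shell
# Dry-run sketch of the rpc.py sequence from target/delete_subsystem.sh as
# reconstructed from this log. Printed only; a live target is required to
# actually issue these.
NQN=nqn.2016-06.io.spdk:cnode1
rpc_cmds="nvmf_create_transport -t tcp -o -u 8192
nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10
nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
bdev_null_create NULL1 1000 512
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
nvmf_subsystem_add_ns $NQN Delay0
nvmf_delete_subsystem $NQN"
printf '%s\n' "$rpc_cmds"
```

The delay bdev is the key design choice: without it, the null bdev would complete I/O so fast that the deletion might race with an idle queue and never exercise the abort path the test is checking.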
00:30:51.728 09:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:51.728 09:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.728 09:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 
00:30:51.990 starting I/O failed: -6 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 [2024-11-20 09:16:17.487895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15642c0 is same with the state(6) to be set 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, 
sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 
Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Write completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.990 starting I/O failed: -6 00:30:51.990 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, 
sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 
00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O 
failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 Write completed with error (sct=0, sc=8) 00:30:51.991 starting I/O failed: -6 00:30:51.991 Read completed with error (sct=0, sc=8) 00:30:51.991 [2024-11-20 09:16:17.491548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdea8000c40 is same with the state(6) to be set 00:30:53.377 [2024-11-20 09:16:18.463425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15659a0 is same with the state(6) to be set 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 [2024-11-20 09:16:18.491130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x15644a0 is same with the state(6) to be set 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Read completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.377 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 [2024-11-20 09:16:18.491651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1564860 is same with the state(6) to be set 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 
Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 [2024-11-20 09:16:18.493241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdea800d7c0 is same with the state(6) to be set 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read 
completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Write completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 Read completed with error (sct=0, sc=8) 00:30:53.378 [2024-11-20 09:16:18.493809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdea800d020 is same with the state(6) to be set 00:30:53.378 Initializing NVMe Controllers 00:30:53.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:53.378 Controller IO queue size 128, less than required. 00:30:53.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:53.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:53.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:53.378 Initialization complete. Launching workers. 
00:30:53.378 ======================================================== 00:30:53.378 Latency(us) 00:30:53.378 Device Information : IOPS MiB/s Average min max 00:30:53.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.16 0.08 904646.42 372.59 1008346.99 00:30:53.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.62 0.08 926371.32 413.53 1011579.48 00:30:53.378 ======================================================== 00:30:53.378 Total : 336.78 0.16 915717.46 372.59 1011579.48 00:30:53.378 00:30:53.378 [2024-11-20 09:16:18.494448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15659a0 (9): Bad file descriptor 00:30:53.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:53.378 09:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.378 09:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:53.378 09:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 906696 00:30:53.378 09:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:53.640 09:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:53.640 09:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 906696 00:30:53.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (906696) - No such process 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 906696 00:30:53.640 09:16:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 906696 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 906696 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:53.640 [2024-11-20 09:16:19.027082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=907413 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907413 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:53.640 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:53.640 [2024-11-20 09:16:19.125828] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:54.209 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:54.209 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907413 00:30:54.209 09:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:54.782 09:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:54.782 09:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907413 00:30:54.782 09:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:55.043 09:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:55.043 09:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907413 00:30:55.043 09:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:55.614 09:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:55.614 09:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907413 00:30:55.614 09:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:56.184 09:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:56.184 09:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907413 00:30:56.184 09:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:56.754 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:56.754 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907413 00:30:56.754 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:56.754 Initializing NVMe Controllers 00:30:56.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:56.754 Controller IO queue size 128, less than required. 00:30:56.754 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:56.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:56.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:56.754 Initialization complete. Launching workers. 
00:30:56.754 ======================================================== 00:30:56.754 Latency(us) 00:30:56.754 Device Information : IOPS MiB/s Average min max 00:30:56.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002608.99 1000225.90 1006942.86 00:30:56.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004345.82 1000519.47 1010871.46 00:30:56.754 ======================================================== 00:30:56.754 Total : 256.00 0.12 1003477.41 1000225.90 1010871.46 00:30:56.754 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907413 00:30:57.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (907413) - No such process 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 907413 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:57.323 rmmod nvme_tcp 00:30:57.323 rmmod nvme_fabrics 00:30:57.323 rmmod nvme_keyring 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 906388 ']' 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 906388 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 906388 ']' 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 906388 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 906388 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 906388' 00:30:57.323 killing process with pid 906388 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 906388 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 906388 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.323 09:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.865 09:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.865 00:30:59.865 real 0m18.453s 00:30:59.865 user 0m26.806s 00:30:59.865 sys 0m7.292s 00:30:59.865 09:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:59.865 09:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.865 ************************************ 00:30:59.865 END TEST nvmf_delete_subsystem 00:30:59.865 ************************************ 00:30:59.865 09:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:59.865 09:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:59.865 09:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:59.865 09:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:59.865 ************************************ 00:30:59.865 START TEST nvmf_host_management 00:30:59.865 ************************************ 00:30:59.865 09:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:59.865 * Looking for test storage... 
00:30:59.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.865 09:16:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:59.865 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:59.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.866 --rc genhtml_branch_coverage=1 00:30:59.866 --rc genhtml_function_coverage=1 00:30:59.866 --rc genhtml_legend=1 00:30:59.866 --rc geninfo_all_blocks=1 00:30:59.866 --rc geninfo_unexecuted_blocks=1 00:30:59.866 00:30:59.866 ' 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:59.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.866 --rc genhtml_branch_coverage=1 00:30:59.866 --rc genhtml_function_coverage=1 00:30:59.866 --rc genhtml_legend=1 00:30:59.866 --rc geninfo_all_blocks=1 00:30:59.866 --rc geninfo_unexecuted_blocks=1 00:30:59.866 00:30:59.866 ' 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:59.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.866 --rc genhtml_branch_coverage=1 00:30:59.866 --rc genhtml_function_coverage=1 00:30:59.866 --rc genhtml_legend=1 00:30:59.866 --rc geninfo_all_blocks=1 00:30:59.866 --rc geninfo_unexecuted_blocks=1 00:30:59.866 00:30:59.866 ' 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:59.866 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.866 --rc genhtml_branch_coverage=1 00:30:59.866 --rc genhtml_function_coverage=1 00:30:59.866 --rc genhtml_legend=1 00:30:59.866 --rc geninfo_all_blocks=1 00:30:59.866 --rc geninfo_unexecuted_blocks=1 00:30:59.866 00:30:59.866 ' 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.866 09:16:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.866 
09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.866 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.867 09:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:08.013 
09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.013 09:16:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:08.013 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.013 09:16:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:08.013 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.013 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.014 09:16:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:08.014 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:08.014 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:08.014 09:16:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:08.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:31:08.014 00:31:08.014 --- 10.0.0.2 ping statistics --- 00:31:08.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.014 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:08.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:31:08.014 00:31:08.014 --- 10.0.0.1 ping statistics --- 00:31:08.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.014 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=912120 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 912120 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 912120 ']' 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.014 09:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.014 [2024-11-20 09:16:32.818811] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:08.014 [2024-11-20 09:16:32.819924] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:31:08.014 [2024-11-20 09:16:32.819973] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.014 [2024-11-20 09:16:32.920028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:08.014 [2024-11-20 09:16:32.973511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.014 [2024-11-20 09:16:32.973562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.014 [2024-11-20 09:16:32.973571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.014 [2024-11-20 09:16:32.973578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.014 [2024-11-20 09:16:32.973586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:08.014 [2024-11-20 09:16:32.975934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:08.014 [2024-11-20 09:16:32.976092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:08.014 [2024-11-20 09:16:32.976226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:08.014 [2024-11-20 09:16:32.976255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.014 [2024-11-20 09:16:33.053387] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:08.014 [2024-11-20 09:16:33.054334] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:08.014 [2024-11-20 09:16:33.054534] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:08.014 [2024-11-20 09:16:33.054941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:08.015 [2024-11-20 09:16:33.054988] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:08.274 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:08.274 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:08.274 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:08.274 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:08.274 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.274 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.274 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:08.274 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.274 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.274 [2024-11-20 09:16:33.681424] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.274 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.274 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:08.275 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.275 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.275 09:16:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:08.275 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:08.275 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:08.275 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.275 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.275 Malloc0 00:31:08.275 [2024-11-20 09:16:33.781742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.275 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.275 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:08.275 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:08.275 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=912466 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 912466 /var/tmp/bdevperf.sock 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 912466 ']' 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:08.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:08.535 { 00:31:08.535 "params": { 00:31:08.535 "name": "Nvme$subsystem", 00:31:08.535 "trtype": "$TEST_TRANSPORT", 00:31:08.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:08.535 "adrfam": "ipv4", 00:31:08.535 "trsvcid": "$NVMF_PORT", 00:31:08.535 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:08.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:08.535 "hdgst": ${hdgst:-false}, 00:31:08.535 "ddgst": ${ddgst:-false} 00:31:08.535 }, 00:31:08.535 "method": "bdev_nvme_attach_controller" 00:31:08.535 } 00:31:08.535 EOF 00:31:08.535 )") 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:08.535 09:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:08.535 "params": { 00:31:08.535 "name": "Nvme0", 00:31:08.535 "trtype": "tcp", 00:31:08.535 "traddr": "10.0.0.2", 00:31:08.535 "adrfam": "ipv4", 00:31:08.535 "trsvcid": "4420", 00:31:08.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:08.535 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:08.535 "hdgst": false, 00:31:08.535 "ddgst": false 00:31:08.535 }, 00:31:08.535 "method": "bdev_nvme_attach_controller" 00:31:08.535 }' 00:31:08.535 [2024-11-20 09:16:33.892667] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:31:08.535 [2024-11-20 09:16:33.892735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912466 ] 00:31:08.535 [2024-11-20 09:16:33.987312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.535 [2024-11-20 09:16:34.039825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.117 Running I/O for 10 seconds... 
00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:09.379 09:16:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=525 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 525 -ge 100 ']' 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.379 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:09.379 
[2024-11-20 09:16:34.797100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12402a0 is same with the state(6) to be set 00:31:09.379 [previous message repeated 47 more times, 09:16:34.797170 through 09:16:34.797542] 00:31:09.380 09:16:34
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.380 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:09.380 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.380 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:09.380 [2024-11-20 09:16:34.810027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.380 [2024-11-20 09:16:34.810083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.380 [2024-11-20 09:16:34.810095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.380 [2024-11-20 09:16:34.810104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.380 [2024-11-20 09:16:34.810112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.380 [2024-11-20 09:16:34.810120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.380 [2024-11-20 09:16:34.810129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.380 [2024-11-20 09:16:34.810136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.380 [2024-11-20 
09:16:34.810144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ca000 is same with the state(6) to be set 00:31:09.380 [2024-11-20 09:16:34.810235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.380 [2024-11-20 09:16:34.810247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.380 [2024-11-20 09:16:34.810265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.380 [2024-11-20 09:16:34.810273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.380 [2024-11-20 09:16:34.810283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.380 [2024-11-20 09:16:34.810298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 
09:16:34.810654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.381 [2024-11-20 09:16:34.810846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.381 [2024-11-20 09:16:34.810857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.810865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.810875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.810882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.810892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.810899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.810909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.810917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.810927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.810935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.810944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.810952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:09.382 [2024-11-20 09:16:34.810961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.810969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.810983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.810991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 
09:16:34.811062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.811401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.382 [2024-11-20 09:16:34.811409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.382 [2024-11-20 09:16:34.812688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:09.382 09:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.382 task offset: 81792 on job bdev=Nvme0n1 fails 00:31:09.382 00:31:09.382 Latency(us) 00:31:09.382 [2024-11-20T08:16:34.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.382 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:09.382 Job: Nvme0n1 ended in about 0.44 seconds with error 00:31:09.382 Verification LBA range: start 0x0 length 0x400 00:31:09.382 Nvme0n1 : 0.44 1438.66 89.92 144.09 0.00 39246.58 1897.81 37573.97 00:31:09.383 [2024-11-20T08:16:34.912Z] =================================================================================================================== 00:31:09.383 [2024-11-20T08:16:34.912Z] Total : 1438.66 89.92 144.09 0.00 39246.58 1897.81 37573.97 00:31:09.383 09:16:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:09.383 [2024-11-20 09:16:34.814888] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:09.383 [2024-11-20 09:16:34.814926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ca000 (9): Bad file descriptor 00:31:09.383 [2024-11-20 09:16:34.821200] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:31:10.323 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 912466 00:31:10.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (912466) - No such process 00:31:10.323 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:10.323 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:10.323 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:10.323 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:10.323 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:10.323 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:10.323 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:10.323 09:16:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:10.323 { 00:31:10.323 "params": { 00:31:10.323 "name": "Nvme$subsystem", 00:31:10.324 "trtype": "$TEST_TRANSPORT", 00:31:10.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.324 "adrfam": "ipv4", 00:31:10.324 "trsvcid": "$NVMF_PORT", 00:31:10.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.324 "hdgst": ${hdgst:-false}, 00:31:10.324 "ddgst": ${ddgst:-false} 00:31:10.324 }, 00:31:10.324 "method": "bdev_nvme_attach_controller" 00:31:10.324 } 00:31:10.324 EOF 00:31:10.324 )") 00:31:10.324 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:10.324 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:10.324 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:10.324 09:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:10.324 "params": { 00:31:10.324 "name": "Nvme0", 00:31:10.324 "trtype": "tcp", 00:31:10.324 "traddr": "10.0.0.2", 00:31:10.324 "adrfam": "ipv4", 00:31:10.324 "trsvcid": "4420", 00:31:10.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.324 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.324 "hdgst": false, 00:31:10.324 "ddgst": false 00:31:10.324 }, 00:31:10.324 "method": "bdev_nvme_attach_controller" 00:31:10.324 }' 00:31:10.584 [2024-11-20 09:16:35.878889] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:31:10.584 [2024-11-20 09:16:35.878962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912821 ] 00:31:10.584 [2024-11-20 09:16:35.972078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.584 [2024-11-20 09:16:36.016057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.843 Running I/O for 1 seconds... 00:31:11.783 2014.00 IOPS, 125.88 MiB/s 00:31:11.783 Latency(us) 00:31:11.783 [2024-11-20T08:16:37.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.783 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:11.783 Verification LBA range: start 0x0 length 0x400 00:31:11.783 Nvme0n1 : 1.01 2062.81 128.93 0.00 0.00 30356.44 546.13 32331.09 00:31:11.783 [2024-11-20T08:16:37.312Z] =================================================================================================================== 00:31:11.783 [2024-11-20T08:16:37.312Z] Total : 2062.81 128.93 0.00 0.00 30356.44 546.13 32331.09 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:12.044 rmmod nvme_tcp 00:31:12.044 rmmod nvme_fabrics 00:31:12.044 rmmod nvme_keyring 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 912120 ']' 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 912120 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 912120 ']' 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 912120 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:12.044 09:16:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 912120 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 912120' 00:31:12.044 killing process with pid 912120 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 912120 00:31:12.044 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 912120 00:31:12.044 [2024-11-20 09:16:37.569808] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:12.304 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:12.304 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:12.304 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:12.304 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:12.304 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:12.304 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:12.304 09:16:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:12.304 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:12.304 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:12.304 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.304 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.304 09:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.215 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.215 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:14.215 00:31:14.215 real 0m14.698s 00:31:14.215 user 0m19.416s 00:31:14.215 sys 0m7.368s 00:31:14.215 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.215 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:14.215 ************************************ 00:31:14.215 END TEST nvmf_host_management 00:31:14.215 ************************************ 00:31:14.215 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:14.215 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:14.215 
09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.215 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:14.477 ************************************ 00:31:14.477 START TEST nvmf_lvol 00:31:14.477 ************************************ 00:31:14.477 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:14.478 * Looking for test storage... 00:31:14.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.478 09:16:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:14.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.478 --rc genhtml_branch_coverage=1 00:31:14.478 --rc 
genhtml_function_coverage=1 00:31:14.478 --rc genhtml_legend=1 00:31:14.478 --rc geninfo_all_blocks=1 00:31:14.478 --rc geninfo_unexecuted_blocks=1 00:31:14.478 00:31:14.478 ' 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:14.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.478 --rc genhtml_branch_coverage=1 00:31:14.478 --rc genhtml_function_coverage=1 00:31:14.478 --rc genhtml_legend=1 00:31:14.478 --rc geninfo_all_blocks=1 00:31:14.478 --rc geninfo_unexecuted_blocks=1 00:31:14.478 00:31:14.478 ' 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:14.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.478 --rc genhtml_branch_coverage=1 00:31:14.478 --rc genhtml_function_coverage=1 00:31:14.478 --rc genhtml_legend=1 00:31:14.478 --rc geninfo_all_blocks=1 00:31:14.478 --rc geninfo_unexecuted_blocks=1 00:31:14.478 00:31:14.478 ' 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:14.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.478 --rc genhtml_branch_coverage=1 00:31:14.478 --rc genhtml_function_coverage=1 00:31:14.478 --rc genhtml_legend=1 00:31:14.478 --rc geninfo_all_blocks=1 00:31:14.478 --rc geninfo_unexecuted_blocks=1 00:31:14.478 00:31:14.478 ' 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.478 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.479 09:16:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.479 09:16:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.479 09:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.479 09:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.741 09:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:14.741 09:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:14.741 09:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:14.741 09:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:22.886 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:22.886 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.886 09:16:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:22.886 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:22.886 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:22.886 09:16:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.886 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.887 09:16:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:22.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:22.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:31:22.887 00:31:22.887 --- 10.0.0.2 ping statistics --- 00:31:22.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.887 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:22.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:31:22.887 00:31:22.887 --- 10.0.0.1 ping statistics --- 00:31:22.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.887 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:22.887 
09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=917274 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 917274 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 917274 ']' 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:22.887 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:22.887 [2024-11-20 09:16:47.554950] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:22.887 [2024-11-20 09:16:47.556076] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:31:22.887 [2024-11-20 09:16:47.556127] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.887 [2024-11-20 09:16:47.658518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:22.887 [2024-11-20 09:16:47.711251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.887 [2024-11-20 09:16:47.711305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.887 [2024-11-20 09:16:47.711314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.887 [2024-11-20 09:16:47.711322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.887 [2024-11-20 09:16:47.711330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.887 [2024-11-20 09:16:47.713507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.887 [2024-11-20 09:16:47.713732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.887 [2024-11-20 09:16:47.713734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.887 [2024-11-20 09:16:47.790597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:22.887 [2024-11-20 09:16:47.791647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:22.887 [2024-11-20 09:16:47.791880] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:22.887 [2024-11-20 09:16:47.792046] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:22.887 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:22.887 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:22.887 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:22.887 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:22.887 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:23.148 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.148 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:23.148 [2024-11-20 09:16:48.610804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.148 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:23.410 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:23.410 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:23.670 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:23.670 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:23.930 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:24.191 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=591939ba-bb1b-4f16-a5d5-a7f051a594bb 00:31:24.191 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 591939ba-bb1b-4f16-a5d5-a7f051a594bb lvol 20 00:31:24.191 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=568d8ff4-0aa7-4c3f-a2f9-ef7f41b2928d 00:31:24.191 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:24.452 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 568d8ff4-0aa7-4c3f-a2f9-ef7f41b2928d 00:31:24.713 09:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:24.713 [2024-11-20 09:16:50.178774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.713 09:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:24.974 
09:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=917878 00:31:24.974 09:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:24.974 09:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:25.918 09:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 568d8ff4-0aa7-4c3f-a2f9-ef7f41b2928d MY_SNAPSHOT 00:31:26.180 09:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8ce53557-dde9-457f-87b5-a42094ab12f2 00:31:26.180 09:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 568d8ff4-0aa7-4c3f-a2f9-ef7f41b2928d 30 00:31:26.440 09:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8ce53557-dde9-457f-87b5-a42094ab12f2 MY_CLONE 00:31:26.701 09:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ff2cb742-06eb-43b5-b938-9eaa55d97124 00:31:26.701 09:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ff2cb742-06eb-43b5-b938-9eaa55d97124 00:31:27.272 09:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 917878 00:31:35.404 Initializing NVMe Controllers 00:31:35.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:35.404 
Controller IO queue size 128, less than required. 00:31:35.404 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:35.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:35.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:35.404 Initialization complete. Launching workers. 00:31:35.404 ======================================================== 00:31:35.404 Latency(us) 00:31:35.404 Device Information : IOPS MiB/s Average min max 00:31:35.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15345.30 59.94 8343.78 4288.33 71931.79 00:31:35.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15091.90 58.95 8481.34 3995.41 54863.65 00:31:35.404 ======================================================== 00:31:35.404 Total : 30437.20 118.90 8411.99 3995.41 71931.79 00:31:35.404 00:31:35.404 09:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:35.404 09:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 568d8ff4-0aa7-4c3f-a2f9-ef7f41b2928d 00:31:35.664 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 591939ba-bb1b-4f16-a5d5-a7f051a594bb 00:31:35.924 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:35.924 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:35.924 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.925 rmmod nvme_tcp 00:31:35.925 rmmod nvme_fabrics 00:31:35.925 rmmod nvme_keyring 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 917274 ']' 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 917274 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 917274 ']' 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 917274 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 917274 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 917274' 00:31:35.925 killing process with pid 917274 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 917274 00:31:35.925 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 917274 00:31:36.185 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:36.185 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:36.185 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:36.185 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:36.185 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:36.185 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:36.185 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:36.185 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:36.185 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:36.185 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.185 09:17:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.185 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.097 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:38.097 00:31:38.097 real 0m23.796s 00:31:38.097 user 0m55.325s 00:31:38.097 sys 0m10.881s 00:31:38.097 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:38.097 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:38.097 ************************************ 00:31:38.097 END TEST nvmf_lvol 00:31:38.097 ************************************ 00:31:38.097 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:38.097 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:38.097 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:38.097 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:38.359 ************************************ 00:31:38.359 START TEST nvmf_lvs_grow 00:31:38.359 ************************************ 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:38.359 * Looking for test storage... 
00:31:38.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:38.359 09:17:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:38.359 09:17:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.359 --rc genhtml_branch_coverage=1 00:31:38.359 --rc genhtml_function_coverage=1 00:31:38.359 --rc genhtml_legend=1 00:31:38.359 --rc geninfo_all_blocks=1 00:31:38.359 --rc geninfo_unexecuted_blocks=1 00:31:38.359 00:31:38.359 ' 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.359 --rc genhtml_branch_coverage=1 00:31:38.359 --rc genhtml_function_coverage=1 00:31:38.359 --rc genhtml_legend=1 00:31:38.359 --rc geninfo_all_blocks=1 00:31:38.359 --rc geninfo_unexecuted_blocks=1 00:31:38.359 00:31:38.359 ' 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.359 --rc genhtml_branch_coverage=1 00:31:38.359 --rc genhtml_function_coverage=1 00:31:38.359 --rc genhtml_legend=1 00:31:38.359 --rc geninfo_all_blocks=1 00:31:38.359 --rc geninfo_unexecuted_blocks=1 00:31:38.359 00:31:38.359 ' 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.359 --rc genhtml_branch_coverage=1 00:31:38.359 --rc genhtml_function_coverage=1 00:31:38.359 --rc genhtml_legend=1 00:31:38.359 --rc geninfo_all_blocks=1 00:31:38.359 --rc 
geninfo_unexecuted_blocks=1 00:31:38.359 00:31:38.359 ' 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:38.359 09:17:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.359 09:17:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:38.359 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.360 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.360 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.360 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:38.360 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:38.360 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:38.360 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:38.360 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:38.621 09:17:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:38.621 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:46.894 
09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.894 09:17:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.894 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:46.895 09:17:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:46.895 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:46.895 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:46.895 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.895 09:17:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:46.895 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:46.895 
09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.895 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:46.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:31:46.895 00:31:46.895 --- 10.0.0.2 ping statistics --- 00:31:46.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.895 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:46.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:31:46.895 00:31:46.895 --- 10.0.0.1 ping statistics --- 00:31:46.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.895 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:46.895 09:17:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=924172 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 924172 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 924172 ']' 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.895 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.896 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.896 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.896 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:46.896 [2024-11-20 09:17:11.381451] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:46.896 [2024-11-20 09:17:11.382586] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:31:46.896 [2024-11-20 09:17:11.382637] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.896 [2024-11-20 09:17:11.480712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.896 [2024-11-20 09:17:11.531399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.896 [2024-11-20 09:17:11.531451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.896 [2024-11-20 09:17:11.531460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.896 [2024-11-20 09:17:11.531467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.896 [2024-11-20 09:17:11.531473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.896 [2024-11-20 09:17:11.532233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.896 [2024-11-20 09:17:11.608149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:46.896 [2024-11-20 09:17:11.608454] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:46.896 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.896 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:31:46.896 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:46.896 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:46.896 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:46.896 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.896 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:46.896 [2024-11-20 09:17:12.401112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:47.156 ************************************ 00:31:47.156 START TEST lvs_grow_clean 00:31:47.156 ************************************ 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:31:47.156 09:17:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:47.156 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:47.417 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:47.417 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:47.417 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=75059663-8f83-4f72-b7ae-f4cba29997ba 00:31:47.417 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75059663-8f83-4f72-b7ae-f4cba29997ba 00:31:47.417 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:47.677 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:47.677 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:47.677 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 75059663-8f83-4f72-b7ae-f4cba29997ba lvol 150 00:31:47.937 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b90609aa-ca38-4b06-a202-4675c21d3828 00:31:47.937 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:47.937 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:47.937 [2024-11-20 09:17:13.420792] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:47.937 [2024-11-20 09:17:13.420958] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:47.937 true 00:31:47.937 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75059663-8f83-4f72-b7ae-f4cba29997ba 00:31:47.937 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:48.197 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:48.197 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:48.458 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b90609aa-ca38-4b06-a202-4675c21d3828 00:31:48.458 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:48.718 [2024-11-20 09:17:14.137502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.718 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:48.979 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=924618 00:31:48.979 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:48.979 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:48.979 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 924618 /var/tmp/bdevperf.sock 00:31:48.979 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 924618 ']' 00:31:48.979 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:48.979 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.979 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:48.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:48.979 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.979 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:48.979 [2024-11-20 09:17:14.391010] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:31:48.979 [2024-11-20 09:17:14.391083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924618 ] 00:31:48.979 [2024-11-20 09:17:14.482275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.239 [2024-11-20 09:17:14.535343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.808 09:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.808 09:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:49.808 09:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:50.069 Nvme0n1 00:31:50.069 09:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:50.328 [ 00:31:50.328 { 00:31:50.328 "name": "Nvme0n1", 00:31:50.328 "aliases": [ 00:31:50.328 "b90609aa-ca38-4b06-a202-4675c21d3828" 00:31:50.328 ], 00:31:50.328 "product_name": "NVMe disk", 00:31:50.328 
"block_size": 4096, 00:31:50.328 "num_blocks": 38912, 00:31:50.328 "uuid": "b90609aa-ca38-4b06-a202-4675c21d3828", 00:31:50.328 "numa_id": 0, 00:31:50.328 "assigned_rate_limits": { 00:31:50.328 "rw_ios_per_sec": 0, 00:31:50.328 "rw_mbytes_per_sec": 0, 00:31:50.328 "r_mbytes_per_sec": 0, 00:31:50.328 "w_mbytes_per_sec": 0 00:31:50.328 }, 00:31:50.328 "claimed": false, 00:31:50.328 "zoned": false, 00:31:50.328 "supported_io_types": { 00:31:50.328 "read": true, 00:31:50.328 "write": true, 00:31:50.328 "unmap": true, 00:31:50.328 "flush": true, 00:31:50.328 "reset": true, 00:31:50.328 "nvme_admin": true, 00:31:50.328 "nvme_io": true, 00:31:50.328 "nvme_io_md": false, 00:31:50.328 "write_zeroes": true, 00:31:50.328 "zcopy": false, 00:31:50.328 "get_zone_info": false, 00:31:50.328 "zone_management": false, 00:31:50.328 "zone_append": false, 00:31:50.328 "compare": true, 00:31:50.328 "compare_and_write": true, 00:31:50.328 "abort": true, 00:31:50.328 "seek_hole": false, 00:31:50.328 "seek_data": false, 00:31:50.328 "copy": true, 00:31:50.328 "nvme_iov_md": false 00:31:50.328 }, 00:31:50.328 "memory_domains": [ 00:31:50.328 { 00:31:50.328 "dma_device_id": "system", 00:31:50.328 "dma_device_type": 1 00:31:50.328 } 00:31:50.328 ], 00:31:50.328 "driver_specific": { 00:31:50.328 "nvme": [ 00:31:50.328 { 00:31:50.328 "trid": { 00:31:50.329 "trtype": "TCP", 00:31:50.329 "adrfam": "IPv4", 00:31:50.329 "traddr": "10.0.0.2", 00:31:50.329 "trsvcid": "4420", 00:31:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:50.329 }, 00:31:50.329 "ctrlr_data": { 00:31:50.329 "cntlid": 1, 00:31:50.329 "vendor_id": "0x8086", 00:31:50.329 "model_number": "SPDK bdev Controller", 00:31:50.329 "serial_number": "SPDK0", 00:31:50.329 "firmware_revision": "25.01", 00:31:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.329 "oacs": { 00:31:50.329 "security": 0, 00:31:50.329 "format": 0, 00:31:50.329 "firmware": 0, 00:31:50.329 "ns_manage": 0 00:31:50.329 }, 00:31:50.329 "multi_ctrlr": true, 
00:31:50.329 "ana_reporting": false 00:31:50.329 }, 00:31:50.329 "vs": { 00:31:50.329 "nvme_version": "1.3" 00:31:50.329 }, 00:31:50.329 "ns_data": { 00:31:50.329 "id": 1, 00:31:50.329 "can_share": true 00:31:50.329 } 00:31:50.329 } 00:31:50.329 ], 00:31:50.329 "mp_policy": "active_passive" 00:31:50.329 } 00:31:50.329 } 00:31:50.329 ] 00:31:50.329 09:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:50.329 09:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=924947 00:31:50.329 09:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:50.329 Running I/O for 10 seconds... 00:31:51.712 Latency(us) 00:31:51.712 [2024-11-20T08:17:17.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.712 Nvme0n1 : 1.00 16798.00 65.62 0.00 0.00 0.00 0.00 0.00 00:31:51.712 [2024-11-20T08:17:17.241Z] =================================================================================================================== 00:31:51.712 [2024-11-20T08:17:17.241Z] Total : 16798.00 65.62 0.00 0.00 0.00 0.00 0.00 00:31:51.712 00:31:52.282 09:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 75059663-8f83-4f72-b7ae-f4cba29997ba 00:31:52.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:52.543 Nvme0n1 : 2.00 17098.50 66.79 0.00 0.00 0.00 0.00 0.00 00:31:52.543 [2024-11-20T08:17:18.072Z] 
=================================================================================================================== 00:31:52.543 [2024-11-20T08:17:18.072Z] Total : 17098.50 66.79 0.00 0.00 0.00 0.00 0.00 00:31:52.543 00:31:52.543 true 00:31:52.543 09:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75059663-8f83-4f72-b7ae-f4cba29997ba 00:31:52.543 09:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:52.803 09:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:52.803 09:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:52.803 09:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 924947 00:31:53.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:53.374 Nvme0n1 : 3.00 17283.33 67.51 0.00 0.00 0.00 0.00 0.00 00:31:53.374 [2024-11-20T08:17:18.903Z] =================================================================================================================== 00:31:53.374 [2024-11-20T08:17:18.903Z] Total : 17283.33 67.51 0.00 0.00 0.00 0.00 0.00 00:31:53.374 00:31:54.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.759 Nvme0n1 : 4.00 18142.00 70.87 0.00 0.00 0.00 0.00 0.00 00:31:54.759 [2024-11-20T08:17:20.288Z] =================================================================================================================== 00:31:54.759 [2024-11-20T08:17:20.288Z] Total : 18142.00 70.87 0.00 0.00 0.00 0.00 0.00 00:31:54.759 00:31:55.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:31:55.331 Nvme0n1 : 5.00 19597.00 76.55 0.00 0.00 0.00 0.00 0.00 00:31:55.331 [2024-11-20T08:17:20.860Z] =================================================================================================================== 00:31:55.331 [2024-11-20T08:17:20.860Z] Total : 19597.00 76.55 0.00 0.00 0.00 0.00 0.00 00:31:55.331 00:31:56.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:56.714 Nvme0n1 : 6.00 20585.33 80.41 0.00 0.00 0.00 0.00 0.00 00:31:56.714 [2024-11-20T08:17:22.243Z] =================================================================================================================== 00:31:56.714 [2024-11-20T08:17:22.243Z] Total : 20585.33 80.41 0.00 0.00 0.00 0.00 0.00 00:31:56.714 00:31:57.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.655 Nvme0n1 : 7.00 21291.29 83.17 0.00 0.00 0.00 0.00 0.00 00:31:57.655 [2024-11-20T08:17:23.184Z] =================================================================================================================== 00:31:57.655 [2024-11-20T08:17:23.184Z] Total : 21291.29 83.17 0.00 0.00 0.00 0.00 0.00 00:31:57.655 00:31:58.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:58.595 Nvme0n1 : 8.00 21820.75 85.24 0.00 0.00 0.00 0.00 0.00 00:31:58.595 [2024-11-20T08:17:24.124Z] =================================================================================================================== 00:31:58.595 [2024-11-20T08:17:24.124Z] Total : 21820.75 85.24 0.00 0.00 0.00 0.00 0.00 00:31:58.595 00:31:59.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:59.534 Nvme0n1 : 9.00 22232.56 86.85 0.00 0.00 0.00 0.00 0.00 00:31:59.534 [2024-11-20T08:17:25.063Z] =================================================================================================================== 00:31:59.534 [2024-11-20T08:17:25.063Z] Total : 22232.56 86.85 0.00 0.00 0.00 0.00 0.00 00:31:59.534 
00:32:00.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:00.474 Nvme0n1 : 10.00 22562.00 88.13 0.00 0.00 0.00 0.00 0.00 00:32:00.474 [2024-11-20T08:17:26.003Z] =================================================================================================================== 00:32:00.474 [2024-11-20T08:17:26.003Z] Total : 22562.00 88.13 0.00 0.00 0.00 0.00 0.00 00:32:00.474 00:32:00.474 00:32:00.474 Latency(us) 00:32:00.474 [2024-11-20T08:17:26.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:00.474 Nvme0n1 : 10.00 22564.16 88.14 0.00 0.00 5669.93 2990.08 32331.09 00:32:00.474 [2024-11-20T08:17:26.003Z] =================================================================================================================== 00:32:00.474 [2024-11-20T08:17:26.003Z] Total : 22564.16 88.14 0.00 0.00 5669.93 2990.08 32331.09 00:32:00.474 { 00:32:00.474 "results": [ 00:32:00.474 { 00:32:00.474 "job": "Nvme0n1", 00:32:00.474 "core_mask": "0x2", 00:32:00.474 "workload": "randwrite", 00:32:00.474 "status": "finished", 00:32:00.474 "queue_depth": 128, 00:32:00.474 "io_size": 4096, 00:32:00.474 "runtime": 10.004716, 00:32:00.474 "iops": 22564.158742736927, 00:32:00.474 "mibps": 88.14124508881612, 00:32:00.474 "io_failed": 0, 00:32:00.474 "io_timeout": 0, 00:32:00.474 "avg_latency_us": 5669.927814258967, 00:32:00.474 "min_latency_us": 2990.08, 00:32:00.474 "max_latency_us": 32331.093333333334 00:32:00.474 } 00:32:00.474 ], 00:32:00.474 "core_count": 1 00:32:00.474 } 00:32:00.474 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 924618 00:32:00.474 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 924618 ']' 00:32:00.474 09:17:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 924618 00:32:00.474 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:00.474 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.474 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 924618 00:32:00.474 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:00.474 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:00.474 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 924618' 00:32:00.474 killing process with pid 924618 00:32:00.474 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 924618 00:32:00.474 Received shutdown signal, test time was about 10.000000 seconds 00:32:00.474 00:32:00.474 Latency(us) 00:32:00.474 [2024-11-20T08:17:26.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.474 [2024-11-20T08:17:26.003Z] =================================================================================================================== 00:32:00.474 [2024-11-20T08:17:26.003Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:00.474 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 924618 00:32:00.735 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:00.735 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:00.995 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75059663-8f83-4f72-b7ae-f4cba29997ba 00:32:00.995 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:01.255 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:01.255 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:01.255 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:01.255 [2024-11-20 09:17:26.732863] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:01.255 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75059663-8f83-4f72-b7ae-f4cba29997ba 00:32:01.255 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:01.255 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75059663-8f83-4f72-b7ae-f4cba29997ba 00:32:01.255 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:01.255 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:01.255 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:01.255 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:01.255 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:01.515 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:01.515 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:01.515 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:01.515 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75059663-8f83-4f72-b7ae-f4cba29997ba 00:32:01.515 request: 00:32:01.515 { 00:32:01.515 "uuid": "75059663-8f83-4f72-b7ae-f4cba29997ba", 00:32:01.515 "method": 
"bdev_lvol_get_lvstores", 00:32:01.515 "req_id": 1 00:32:01.515 } 00:32:01.515 Got JSON-RPC error response 00:32:01.515 response: 00:32:01.515 { 00:32:01.515 "code": -19, 00:32:01.515 "message": "No such device" 00:32:01.515 } 00:32:01.515 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:01.515 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:01.515 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:01.515 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:01.515 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:01.776 aio_bdev 00:32:01.776 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b90609aa-ca38-4b06-a202-4675c21d3828 00:32:01.776 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b90609aa-ca38-4b06-a202-4675c21d3828 00:32:01.776 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:01.776 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:01.776 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:01.776 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:01.776 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:02.037 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b90609aa-ca38-4b06-a202-4675c21d3828 -t 2000 00:32:02.037 [ 00:32:02.037 { 00:32:02.037 "name": "b90609aa-ca38-4b06-a202-4675c21d3828", 00:32:02.037 "aliases": [ 00:32:02.037 "lvs/lvol" 00:32:02.037 ], 00:32:02.037 "product_name": "Logical Volume", 00:32:02.037 "block_size": 4096, 00:32:02.037 "num_blocks": 38912, 00:32:02.037 "uuid": "b90609aa-ca38-4b06-a202-4675c21d3828", 00:32:02.037 "assigned_rate_limits": { 00:32:02.037 "rw_ios_per_sec": 0, 00:32:02.037 "rw_mbytes_per_sec": 0, 00:32:02.037 "r_mbytes_per_sec": 0, 00:32:02.037 "w_mbytes_per_sec": 0 00:32:02.037 }, 00:32:02.037 "claimed": false, 00:32:02.037 "zoned": false, 00:32:02.037 "supported_io_types": { 00:32:02.037 "read": true, 00:32:02.037 "write": true, 00:32:02.037 "unmap": true, 00:32:02.037 "flush": false, 00:32:02.037 "reset": true, 00:32:02.037 "nvme_admin": false, 00:32:02.037 "nvme_io": false, 00:32:02.037 "nvme_io_md": false, 00:32:02.037 "write_zeroes": true, 00:32:02.037 "zcopy": false, 00:32:02.037 "get_zone_info": false, 00:32:02.037 "zone_management": false, 00:32:02.037 "zone_append": false, 00:32:02.037 "compare": false, 00:32:02.037 "compare_and_write": false, 00:32:02.037 "abort": false, 00:32:02.037 "seek_hole": true, 00:32:02.037 "seek_data": true, 00:32:02.037 "copy": false, 00:32:02.037 "nvme_iov_md": false 00:32:02.037 }, 00:32:02.037 "driver_specific": { 00:32:02.037 "lvol": { 00:32:02.037 "lvol_store_uuid": "75059663-8f83-4f72-b7ae-f4cba29997ba", 00:32:02.037 "base_bdev": "aio_bdev", 00:32:02.037 
"thin_provision": false, 00:32:02.037 "num_allocated_clusters": 38, 00:32:02.037 "snapshot": false, 00:32:02.037 "clone": false, 00:32:02.037 "esnap_clone": false 00:32:02.037 } 00:32:02.037 } 00:32:02.037 } 00:32:02.037 ] 00:32:02.037 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:02.037 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75059663-8f83-4f72-b7ae-f4cba29997ba 00:32:02.037 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:02.297 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:02.297 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75059663-8f83-4f72-b7ae-f4cba29997ba 00:32:02.297 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:02.557 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:02.557 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b90609aa-ca38-4b06-a202-4675c21d3828 00:32:02.557 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 75059663-8f83-4f72-b7ae-f4cba29997ba 
00:32:02.818 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:03.078 00:32:03.078 real 0m15.941s 00:32:03.078 user 0m15.595s 00:32:03.078 sys 0m1.454s 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:03.078 ************************************ 00:32:03.078 END TEST lvs_grow_clean 00:32:03.078 ************************************ 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:03.078 ************************************ 00:32:03.078 START TEST lvs_grow_dirty 00:32:03.078 ************************************ 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:03.078 09:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:03.078 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:03.338 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:03.338 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:03.598 09:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:03.598 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:03.598 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:03.598 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:03.598 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:03.598 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 lvol 150 00:32:03.870 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=12261ef6-56ac-4b9e-8f68-a16a51e7fe20 00:32:03.870 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:03.870 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:03.870 [2024-11-20 09:17:29.372759] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:03.870 [2024-11-20 
09:17:29.372899] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:03.870 true 00:32:03.870 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:03.870 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:04.131 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:04.131 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:04.391 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 12261ef6-56ac-4b9e-8f68-a16a51e7fe20 00:32:04.391 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:04.651 [2024-11-20 09:17:30.025336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.651 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:04.911 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=927682 00:32:04.911 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:04.911 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:04.911 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 927682 /var/tmp/bdevperf.sock 00:32:04.911 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 927682 ']' 00:32:04.911 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:04.911 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:04.911 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:04.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:04.911 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:04.911 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:04.911 [2024-11-20 09:17:30.266877] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:32:04.911 [2024-11-20 09:17:30.266944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927682 ] 00:32:04.911 [2024-11-20 09:17:30.350887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.911 [2024-11-20 09:17:30.382046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.851 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.851 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:05.851 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:06.111 Nvme0n1 00:32:06.111 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:06.111 [ 00:32:06.111 { 00:32:06.111 "name": "Nvme0n1", 00:32:06.111 "aliases": [ 00:32:06.111 "12261ef6-56ac-4b9e-8f68-a16a51e7fe20" 00:32:06.111 ], 00:32:06.111 "product_name": "NVMe disk", 00:32:06.111 "block_size": 4096, 00:32:06.111 "num_blocks": 38912, 00:32:06.111 "uuid": "12261ef6-56ac-4b9e-8f68-a16a51e7fe20", 00:32:06.111 "numa_id": 0, 00:32:06.111 "assigned_rate_limits": { 00:32:06.111 "rw_ios_per_sec": 0, 00:32:06.111 "rw_mbytes_per_sec": 0, 00:32:06.111 "r_mbytes_per_sec": 0, 00:32:06.111 "w_mbytes_per_sec": 0 00:32:06.111 }, 00:32:06.111 "claimed": false, 00:32:06.111 "zoned": false, 
00:32:06.111 "supported_io_types": { 00:32:06.111 "read": true, 00:32:06.111 "write": true, 00:32:06.111 "unmap": true, 00:32:06.111 "flush": true, 00:32:06.111 "reset": true, 00:32:06.111 "nvme_admin": true, 00:32:06.111 "nvme_io": true, 00:32:06.111 "nvme_io_md": false, 00:32:06.111 "write_zeroes": true, 00:32:06.111 "zcopy": false, 00:32:06.111 "get_zone_info": false, 00:32:06.111 "zone_management": false, 00:32:06.111 "zone_append": false, 00:32:06.111 "compare": true, 00:32:06.111 "compare_and_write": true, 00:32:06.111 "abort": true, 00:32:06.111 "seek_hole": false, 00:32:06.111 "seek_data": false, 00:32:06.111 "copy": true, 00:32:06.111 "nvme_iov_md": false 00:32:06.111 }, 00:32:06.111 "memory_domains": [ 00:32:06.111 { 00:32:06.111 "dma_device_id": "system", 00:32:06.111 "dma_device_type": 1 00:32:06.111 } 00:32:06.111 ], 00:32:06.111 "driver_specific": { 00:32:06.111 "nvme": [ 00:32:06.111 { 00:32:06.111 "trid": { 00:32:06.111 "trtype": "TCP", 00:32:06.111 "adrfam": "IPv4", 00:32:06.111 "traddr": "10.0.0.2", 00:32:06.111 "trsvcid": "4420", 00:32:06.111 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:06.111 }, 00:32:06.111 "ctrlr_data": { 00:32:06.111 "cntlid": 1, 00:32:06.111 "vendor_id": "0x8086", 00:32:06.111 "model_number": "SPDK bdev Controller", 00:32:06.111 "serial_number": "SPDK0", 00:32:06.111 "firmware_revision": "25.01", 00:32:06.111 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.111 "oacs": { 00:32:06.111 "security": 0, 00:32:06.111 "format": 0, 00:32:06.111 "firmware": 0, 00:32:06.111 "ns_manage": 0 00:32:06.111 }, 00:32:06.111 "multi_ctrlr": true, 00:32:06.111 "ana_reporting": false 00:32:06.111 }, 00:32:06.111 "vs": { 00:32:06.111 "nvme_version": "1.3" 00:32:06.111 }, 00:32:06.111 "ns_data": { 00:32:06.111 "id": 1, 00:32:06.111 "can_share": true 00:32:06.111 } 00:32:06.111 } 00:32:06.111 ], 00:32:06.111 "mp_policy": "active_passive" 00:32:06.111 } 00:32:06.111 } 00:32:06.111 ] 00:32:06.111 09:17:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:06.111 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=928012 00:32:06.111 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:06.371 Running I/O for 10 seconds... 00:32:07.310 Latency(us) 00:32:07.310 [2024-11-20T08:17:32.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.310 Nvme0n1 : 1.00 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:32:07.310 [2024-11-20T08:17:32.839Z] =================================================================================================================== 00:32:07.310 [2024-11-20T08:17:32.839Z] Total : 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:32:07.310 00:32:08.249 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:08.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.249 Nvme0n1 : 2.00 17748.50 69.33 0.00 0.00 0.00 0.00 0.00 00:32:08.249 [2024-11-20T08:17:33.778Z] =================================================================================================================== 00:32:08.249 [2024-11-20T08:17:33.778Z] Total : 17748.50 69.33 0.00 0.00 0.00 0.00 0.00 00:32:08.249 00:32:08.508 true 00:32:08.508 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:08.508 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:08.508 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:08.508 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:08.508 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 928012 00:32:09.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.447 Nvme0n1 : 3.00 17864.67 69.78 0.00 0.00 0.00 0.00 0.00 00:32:09.447 [2024-11-20T08:17:34.976Z] =================================================================================================================== 00:32:09.447 [2024-11-20T08:17:34.976Z] Total : 17864.67 69.78 0.00 0.00 0.00 0.00 0.00 00:32:09.447 00:32:10.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.385 Nvme0n1 : 4.00 17907.00 69.95 0.00 0.00 0.00 0.00 0.00 00:32:10.385 [2024-11-20T08:17:35.914Z] =================================================================================================================== 00:32:10.385 [2024-11-20T08:17:35.914Z] Total : 17907.00 69.95 0.00 0.00 0.00 0.00 0.00 00:32:10.385 00:32:11.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.324 Nvme0n1 : 5.00 19253.20 75.21 0.00 0.00 0.00 0.00 0.00 00:32:11.324 [2024-11-20T08:17:36.853Z] =================================================================================================================== 00:32:11.324 [2024-11-20T08:17:36.854Z] Total : 19253.20 75.21 0.00 0.00 0.00 0.00 0.00 00:32:11.325 00:32:12.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:12.263 Nvme0n1 : 6.00 20298.83 79.29 0.00 0.00 0.00 0.00 0.00 00:32:12.263 [2024-11-20T08:17:37.792Z] =================================================================================================================== 00:32:12.263 [2024-11-20T08:17:37.792Z] Total : 20298.83 79.29 0.00 0.00 0.00 0.00 0.00 00:32:12.263 00:32:13.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:13.203 Nvme0n1 : 7.00 21045.71 82.21 0.00 0.00 0.00 0.00 0.00 00:32:13.203 [2024-11-20T08:17:38.732Z] =================================================================================================================== 00:32:13.203 [2024-11-20T08:17:38.732Z] Total : 21045.71 82.21 0.00 0.00 0.00 0.00 0.00 00:32:13.203 00:32:14.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:14.583 Nvme0n1 : 8.00 21605.88 84.40 0.00 0.00 0.00 0.00 0.00 00:32:14.583 [2024-11-20T08:17:40.112Z] =================================================================================================================== 00:32:14.583 [2024-11-20T08:17:40.112Z] Total : 21605.88 84.40 0.00 0.00 0.00 0.00 0.00 00:32:14.583 00:32:15.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:15.522 Nvme0n1 : 9.00 22041.56 86.10 0.00 0.00 0.00 0.00 0.00 00:32:15.522 [2024-11-20T08:17:41.051Z] =================================================================================================================== 00:32:15.522 [2024-11-20T08:17:41.051Z] Total : 22041.56 86.10 0.00 0.00 0.00 0.00 0.00 00:32:15.522 00:32:16.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:16.462 Nvme0n1 : 10.00 22390.10 87.46 0.00 0.00 0.00 0.00 0.00 00:32:16.462 [2024-11-20T08:17:41.991Z] =================================================================================================================== 00:32:16.462 [2024-11-20T08:17:41.991Z] Total : 22390.10 87.46 0.00 0.00 0.00 0.00 0.00 00:32:16.462 00:32:16.462 
00:32:16.463 Latency(us) 00:32:16.463 [2024-11-20T08:17:41.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:16.463 Nvme0n1 : 10.00 22396.22 87.49 0.00 0.00 5712.33 4669.44 31675.73 00:32:16.463 [2024-11-20T08:17:41.992Z] =================================================================================================================== 00:32:16.463 [2024-11-20T08:17:41.992Z] Total : 22396.22 87.49 0.00 0.00 5712.33 4669.44 31675.73 00:32:16.463 { 00:32:16.463 "results": [ 00:32:16.463 { 00:32:16.463 "job": "Nvme0n1", 00:32:16.463 "core_mask": "0x2", 00:32:16.463 "workload": "randwrite", 00:32:16.463 "status": "finished", 00:32:16.463 "queue_depth": 128, 00:32:16.463 "io_size": 4096, 00:32:16.463 "runtime": 10.002981, 00:32:16.463 "iops": 22396.223685719287, 00:32:16.463 "mibps": 87.48524877234097, 00:32:16.463 "io_failed": 0, 00:32:16.463 "io_timeout": 0, 00:32:16.463 "avg_latency_us": 5712.332670264414, 00:32:16.463 "min_latency_us": 4669.44, 00:32:16.463 "max_latency_us": 31675.733333333334 00:32:16.463 } 00:32:16.463 ], 00:32:16.463 "core_count": 1 00:32:16.463 } 00:32:16.463 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 927682 00:32:16.463 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 927682 ']' 00:32:16.463 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 927682 00:32:16.463 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:16.463 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:16.463 09:17:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 927682 00:32:16.463 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:16.463 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:16.463 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 927682' 00:32:16.463 killing process with pid 927682 00:32:16.463 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 927682 00:32:16.463 Received shutdown signal, test time was about 10.000000 seconds 00:32:16.463 00:32:16.463 Latency(us) 00:32:16.463 [2024-11-20T08:17:41.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.463 [2024-11-20T08:17:41.992Z] =================================================================================================================== 00:32:16.463 [2024-11-20T08:17:41.992Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:16.463 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 927682 00:32:16.463 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:16.722 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:16.722 09:17:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:16.722 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 924172 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 924172 00:32:16.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 924172 Killed "${NVMF_APP[@]}" "$@" 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=930034 00:32:16.982 09:17:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 930034 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 930034 ']' 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:16.982 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:17.242 [2024-11-20 09:17:42.512793] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:17.242 [2024-11-20 09:17:42.513795] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:32:17.242 [2024-11-20 09:17:42.513837] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.242 [2024-11-20 09:17:42.606740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.242 [2024-11-20 09:17:42.637301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.242 [2024-11-20 09:17:42.637327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.242 [2024-11-20 09:17:42.637332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.242 [2024-11-20 09:17:42.637340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.242 [2024-11-20 09:17:42.637344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:17.242 [2024-11-20 09:17:42.637787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.242 [2024-11-20 09:17:42.687908] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:17.242 [2024-11-20 09:17:42.688099] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:17.811 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.811 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:17.811 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:17.811 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.811 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:18.072 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.072 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:18.072 [2024-11-20 09:17:43.516317] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:18.072 [2024-11-20 09:17:43.516566] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:18.072 [2024-11-20 09:17:43.516658] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:18.072 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:18.072 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 12261ef6-56ac-4b9e-8f68-a16a51e7fe20 00:32:18.072 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=12261ef6-56ac-4b9e-8f68-a16a51e7fe20 00:32:18.072 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:18.072 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:18.072 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:18.072 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:18.072 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:18.332 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 12261ef6-56ac-4b9e-8f68-a16a51e7fe20 -t 2000 00:32:18.592 [ 00:32:18.592 { 00:32:18.592 "name": "12261ef6-56ac-4b9e-8f68-a16a51e7fe20", 00:32:18.592 "aliases": [ 00:32:18.592 "lvs/lvol" 00:32:18.592 ], 00:32:18.592 "product_name": "Logical Volume", 00:32:18.592 "block_size": 4096, 00:32:18.592 "num_blocks": 38912, 00:32:18.592 "uuid": "12261ef6-56ac-4b9e-8f68-a16a51e7fe20", 00:32:18.592 "assigned_rate_limits": { 00:32:18.592 "rw_ios_per_sec": 0, 00:32:18.592 "rw_mbytes_per_sec": 0, 00:32:18.592 "r_mbytes_per_sec": 0, 00:32:18.592 "w_mbytes_per_sec": 0 00:32:18.592 }, 00:32:18.592 "claimed": false, 00:32:18.592 "zoned": false, 00:32:18.592 "supported_io_types": { 00:32:18.592 "read": true, 00:32:18.592 "write": true, 00:32:18.592 "unmap": true, 00:32:18.592 "flush": false, 00:32:18.592 "reset": true, 00:32:18.592 "nvme_admin": false, 00:32:18.592 "nvme_io": false, 00:32:18.592 "nvme_io_md": false, 00:32:18.592 "write_zeroes": true, 
00:32:18.592 "zcopy": false, 00:32:18.592 "get_zone_info": false, 00:32:18.592 "zone_management": false, 00:32:18.592 "zone_append": false, 00:32:18.592 "compare": false, 00:32:18.592 "compare_and_write": false, 00:32:18.592 "abort": false, 00:32:18.592 "seek_hole": true, 00:32:18.592 "seek_data": true, 00:32:18.592 "copy": false, 00:32:18.592 "nvme_iov_md": false 00:32:18.592 }, 00:32:18.592 "driver_specific": { 00:32:18.592 "lvol": { 00:32:18.592 "lvol_store_uuid": "50a4d807-d67b-4ba3-9b03-98101749ecf9", 00:32:18.592 "base_bdev": "aio_bdev", 00:32:18.592 "thin_provision": false, 00:32:18.592 "num_allocated_clusters": 38, 00:32:18.592 "snapshot": false, 00:32:18.592 "clone": false, 00:32:18.592 "esnap_clone": false 00:32:18.592 } 00:32:18.592 } 00:32:18.592 } 00:32:18.592 ] 00:32:18.592 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:18.592 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:18.592 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:18.592 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:18.592 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:18.592 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:18.852 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:18.852 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:19.112 [2024-11-20 09:17:44.426342] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:19.112 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:19.112 request: 00:32:19.112 { 00:32:19.112 "uuid": "50a4d807-d67b-4ba3-9b03-98101749ecf9", 00:32:19.112 "method": "bdev_lvol_get_lvstores", 00:32:19.112 "req_id": 1 00:32:19.112 } 00:32:19.112 Got JSON-RPC error response 00:32:19.112 response: 00:32:19.112 { 00:32:19.112 "code": -19, 00:32:19.112 "message": "No such device" 00:32:19.112 } 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:19.372 aio_bdev 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 12261ef6-56ac-4b9e-8f68-a16a51e7fe20 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=12261ef6-56ac-4b9e-8f68-a16a51e7fe20 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:19.372 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:19.632 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 12261ef6-56ac-4b9e-8f68-a16a51e7fe20 -t 2000 00:32:19.892 [ 00:32:19.892 { 00:32:19.892 "name": "12261ef6-56ac-4b9e-8f68-a16a51e7fe20", 00:32:19.892 "aliases": [ 00:32:19.892 "lvs/lvol" 00:32:19.892 ], 00:32:19.892 "product_name": "Logical Volume", 00:32:19.892 "block_size": 4096, 00:32:19.892 "num_blocks": 38912, 00:32:19.892 "uuid": "12261ef6-56ac-4b9e-8f68-a16a51e7fe20", 00:32:19.892 "assigned_rate_limits": { 00:32:19.892 "rw_ios_per_sec": 0, 00:32:19.892 "rw_mbytes_per_sec": 0, 00:32:19.892 
"r_mbytes_per_sec": 0, 00:32:19.892 "w_mbytes_per_sec": 0 00:32:19.892 }, 00:32:19.892 "claimed": false, 00:32:19.892 "zoned": false, 00:32:19.892 "supported_io_types": { 00:32:19.892 "read": true, 00:32:19.892 "write": true, 00:32:19.892 "unmap": true, 00:32:19.892 "flush": false, 00:32:19.892 "reset": true, 00:32:19.892 "nvme_admin": false, 00:32:19.892 "nvme_io": false, 00:32:19.892 "nvme_io_md": false, 00:32:19.892 "write_zeroes": true, 00:32:19.892 "zcopy": false, 00:32:19.892 "get_zone_info": false, 00:32:19.892 "zone_management": false, 00:32:19.892 "zone_append": false, 00:32:19.892 "compare": false, 00:32:19.892 "compare_and_write": false, 00:32:19.892 "abort": false, 00:32:19.892 "seek_hole": true, 00:32:19.892 "seek_data": true, 00:32:19.892 "copy": false, 00:32:19.892 "nvme_iov_md": false 00:32:19.892 }, 00:32:19.892 "driver_specific": { 00:32:19.892 "lvol": { 00:32:19.892 "lvol_store_uuid": "50a4d807-d67b-4ba3-9b03-98101749ecf9", 00:32:19.892 "base_bdev": "aio_bdev", 00:32:19.892 "thin_provision": false, 00:32:19.892 "num_allocated_clusters": 38, 00:32:19.892 "snapshot": false, 00:32:19.892 "clone": false, 00:32:19.892 "esnap_clone": false 00:32:19.892 } 00:32:19.892 } 00:32:19.892 } 00:32:19.892 ] 00:32:19.892 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:19.892 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:19.892 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:19.892 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:19.892 09:17:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:19.892 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:20.152 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:20.152 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 12261ef6-56ac-4b9e-8f68-a16a51e7fe20 00:32:20.412 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50a4d807-d67b-4ba3-9b03-98101749ecf9 00:32:20.412 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:20.672 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:20.672 00:32:20.672 real 0m17.611s 00:32:20.672 user 0m35.545s 00:32:20.672 sys 0m3.042s 00:32:20.672 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:20.672 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:20.672 ************************************ 00:32:20.672 END TEST lvs_grow_dirty 00:32:20.672 ************************************ 
00:32:20.672 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:20.672 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:20.672 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:20.672 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:20.672 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:20.672 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:20.672 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:20.673 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:20.673 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:20.673 nvmf_trace.0 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:20.932 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:20.932 rmmod nvme_tcp 00:32:20.932 rmmod nvme_fabrics 00:32:20.932 rmmod nvme_keyring 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 930034 ']' 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 930034 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 930034 ']' 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 930034 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 930034 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:20.932 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:20.933 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 930034' 00:32:20.933 killing process with pid 930034 00:32:20.933 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 930034 00:32:20.933 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 930034 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:21.193 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.103 09:17:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:23.103 00:32:23.103 real 0m44.916s 00:32:23.103 user 0m54.188s 00:32:23.103 sys 0m10.555s 00:32:23.103 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:23.103 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:23.103 ************************************ 00:32:23.103 END TEST nvmf_lvs_grow 00:32:23.103 ************************************ 00:32:23.103 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:23.103 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:23.103 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:23.103 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:23.364 ************************************ 00:32:23.364 START TEST nvmf_bdev_io_wait 00:32:23.364 ************************************ 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:23.364 * Looking for test storage... 
00:32:23.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:23.364 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:23.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.365 --rc genhtml_branch_coverage=1 00:32:23.365 --rc genhtml_function_coverage=1 00:32:23.365 --rc genhtml_legend=1 00:32:23.365 --rc geninfo_all_blocks=1 00:32:23.365 --rc geninfo_unexecuted_blocks=1 00:32:23.365 00:32:23.365 ' 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:23.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.365 --rc genhtml_branch_coverage=1 00:32:23.365 --rc genhtml_function_coverage=1 00:32:23.365 --rc genhtml_legend=1 00:32:23.365 --rc geninfo_all_blocks=1 00:32:23.365 --rc geninfo_unexecuted_blocks=1 00:32:23.365 00:32:23.365 ' 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:23.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.365 --rc genhtml_branch_coverage=1 00:32:23.365 --rc genhtml_function_coverage=1 00:32:23.365 --rc genhtml_legend=1 00:32:23.365 --rc geninfo_all_blocks=1 00:32:23.365 --rc geninfo_unexecuted_blocks=1 00:32:23.365 00:32:23.365 ' 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:23.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.365 --rc genhtml_branch_coverage=1 00:32:23.365 --rc genhtml_function_coverage=1 
00:32:23.365 --rc genhtml_legend=1 00:32:23.365 --rc geninfo_all_blocks=1 00:32:23.365 --rc geninfo_unexecuted_blocks=1 00:32:23.365 00:32:23.365 ' 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:23.365 09:17:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.365 09:17:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:23.365 09:17:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:23.365 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:23.626 09:17:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:23.626 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:31.784 09:17:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:31.784 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:31.784 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:31.784 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:31.784 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:31.784 09:17:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.784 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:31.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:31.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:32:31.785 00:32:31.785 --- 10.0.0.2 ping statistics --- 00:32:31.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.785 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:31.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:32:31.785 00:32:31.785 --- 10.0.0.1 ping statistics --- 00:32:31.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.785 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:31.785 09:17:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=935092 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 935092 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 935092 ']' 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.785 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:31.785 [2024-11-20 09:17:56.477954] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:31.785 [2024-11-20 09:17:56.479067] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:32:31.785 [2024-11-20 09:17:56.479114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.785 [2024-11-20 09:17:56.577623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:31.785 [2024-11-20 09:17:56.632303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.785 [2024-11-20 09:17:56.632354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.785 [2024-11-20 09:17:56.632363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.785 [2024-11-20 09:17:56.632371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.785 [2024-11-20 09:17:56.632377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:31.785 [2024-11-20 09:17:56.634732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.785 [2024-11-20 09:17:56.634893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:31.785 [2024-11-20 09:17:56.635058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.785 [2024-11-20 09:17:56.635057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:31.785 [2024-11-20 09:17:56.635415] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:31.785 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.785 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:31.785 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.785 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.785 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.085 09:17:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:32.085 [2024-11-20 09:17:57.404245] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:32.085 [2024-11-20 09:17:57.404975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:32.085 [2024-11-20 09:17:57.405088] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:32.085 [2024-11-20 09:17:57.405239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:32.085 [2024-11-20 09:17:57.415926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:32.085 Malloc0 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.085 09:17:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:32.085 [2024-11-20 09:17:57.492222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=935149 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=935151 00:32:32.085 09:17:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:32.085 { 00:32:32.085 "params": { 00:32:32.085 "name": "Nvme$subsystem", 00:32:32.085 "trtype": "$TEST_TRANSPORT", 00:32:32.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.085 "adrfam": "ipv4", 00:32:32.085 "trsvcid": "$NVMF_PORT", 00:32:32.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.085 "hdgst": ${hdgst:-false}, 00:32:32.085 "ddgst": ${ddgst:-false} 00:32:32.085 }, 00:32:32.085 "method": "bdev_nvme_attach_controller" 00:32:32.085 } 00:32:32.085 EOF 00:32:32.085 )") 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=935153 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:32.085 09:17:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:32.085 { 00:32:32.085 "params": { 00:32:32.085 "name": "Nvme$subsystem", 00:32:32.085 "trtype": "$TEST_TRANSPORT", 00:32:32.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.085 "adrfam": "ipv4", 00:32:32.085 "trsvcid": "$NVMF_PORT", 00:32:32.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.085 "hdgst": ${hdgst:-false}, 00:32:32.085 "ddgst": ${ddgst:-false} 00:32:32.085 }, 00:32:32.085 "method": "bdev_nvme_attach_controller" 00:32:32.085 } 00:32:32.085 EOF 00:32:32.085 )") 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=935156 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:32.085 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:32.086 { 00:32:32.086 "params": { 00:32:32.086 "name": 
"Nvme$subsystem", 00:32:32.086 "trtype": "$TEST_TRANSPORT", 00:32:32.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.086 "adrfam": "ipv4", 00:32:32.086 "trsvcid": "$NVMF_PORT", 00:32:32.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.086 "hdgst": ${hdgst:-false}, 00:32:32.086 "ddgst": ${ddgst:-false} 00:32:32.086 }, 00:32:32.086 "method": "bdev_nvme_attach_controller" 00:32:32.086 } 00:32:32.086 EOF 00:32:32.086 )") 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:32.086 { 00:32:32.086 "params": { 00:32:32.086 "name": "Nvme$subsystem", 00:32:32.086 "trtype": "$TEST_TRANSPORT", 00:32:32.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.086 "adrfam": "ipv4", 00:32:32.086 "trsvcid": "$NVMF_PORT", 00:32:32.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.086 "hdgst": ${hdgst:-false}, 00:32:32.086 "ddgst": ${ddgst:-false} 00:32:32.086 }, 00:32:32.086 "method": 
"bdev_nvme_attach_controller" 00:32:32.086 } 00:32:32.086 EOF 00:32:32.086 )") 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 935149 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:32.086 "params": { 00:32:32.086 "name": "Nvme1", 00:32:32.086 "trtype": "tcp", 00:32:32.086 "traddr": "10.0.0.2", 00:32:32.086 "adrfam": "ipv4", 00:32:32.086 "trsvcid": "4420", 00:32:32.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:32.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:32.086 "hdgst": false, 00:32:32.086 "ddgst": false 00:32:32.086 }, 00:32:32.086 "method": "bdev_nvme_attach_controller" 00:32:32.086 }' 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:32.086 "params": { 00:32:32.086 "name": "Nvme1", 00:32:32.086 "trtype": "tcp", 00:32:32.086 "traddr": "10.0.0.2", 00:32:32.086 "adrfam": "ipv4", 00:32:32.086 "trsvcid": "4420", 00:32:32.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:32.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:32.086 "hdgst": false, 00:32:32.086 "ddgst": false 00:32:32.086 }, 00:32:32.086 "method": "bdev_nvme_attach_controller" 00:32:32.086 }' 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:32.086 "params": { 00:32:32.086 "name": "Nvme1", 00:32:32.086 "trtype": "tcp", 00:32:32.086 "traddr": "10.0.0.2", 00:32:32.086 "adrfam": "ipv4", 00:32:32.086 "trsvcid": "4420", 00:32:32.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:32.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:32.086 "hdgst": false, 00:32:32.086 "ddgst": false 00:32:32.086 }, 00:32:32.086 "method": "bdev_nvme_attach_controller" 00:32:32.086 }' 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:32.086 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:32.086 "params": { 00:32:32.086 "name": "Nvme1", 00:32:32.086 "trtype": "tcp", 00:32:32.086 "traddr": "10.0.0.2", 00:32:32.086 "adrfam": "ipv4", 00:32:32.086 "trsvcid": "4420", 00:32:32.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:32.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:32.086 "hdgst": false, 00:32:32.086 "ddgst": false 00:32:32.086 }, 00:32:32.086 "method": "bdev_nvme_attach_controller" 
00:32:32.086 }' 00:32:32.086 [2024-11-20 09:17:57.548959] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:32:32.086 [2024-11-20 09:17:57.549031] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:32.086 [2024-11-20 09:17:57.551064] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:32:32.086 [2024-11-20 09:17:57.551129] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:32.086 [2024-11-20 09:17:57.553643] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:32:32.086 [2024-11-20 09:17:57.553710] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:32.086 [2024-11-20 09:17:57.555000] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:32:32.086 [2024-11-20 09:17:57.555060] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:32.382 [2024-11-20 09:17:57.769417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.382 [2024-11-20 09:17:57.810288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:32.382 [2024-11-20 09:17:57.858839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.650 [2024-11-20 09:17:57.900978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:32.650 [2024-11-20 09:17:57.926894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.650 [2024-11-20 09:17:57.967126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:32.650 [2024-11-20 09:17:57.999055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.650 [2024-11-20 09:17:58.036662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:32.650 Running I/O for 1 seconds... 00:32:32.650 Running I/O for 1 seconds... 00:32:32.650 Running I/O for 1 seconds... 00:32:32.910 Running I/O for 1 seconds... 
00:32:33.855 12981.00 IOPS, 50.71 MiB/s 00:32:33.855 Latency(us) 00:32:33.855 [2024-11-20T08:17:59.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.855 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:33.855 Nvme1n1 : 1.01 13027.28 50.89 0.00 0.00 9793.37 4860.59 12451.84 00:32:33.855 [2024-11-20T08:17:59.384Z] =================================================================================================================== 00:32:33.855 [2024-11-20T08:17:59.384Z] Total : 13027.28 50.89 0.00 0.00 9793.37 4860.59 12451.84 00:32:33.855 6780.00 IOPS, 26.48 MiB/s 00:32:33.855 Latency(us) 00:32:33.855 [2024-11-20T08:17:59.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.855 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:33.855 Nvme1n1 : 1.02 6806.88 26.59 0.00 0.00 18671.56 2252.80 29054.29 00:32:33.855 [2024-11-20T08:17:59.384Z] =================================================================================================================== 00:32:33.855 [2024-11-20T08:17:59.384Z] Total : 6806.88 26.59 0.00 0.00 18671.56 2252.80 29054.29 00:32:33.855 188072.00 IOPS, 734.66 MiB/s 00:32:33.855 Latency(us) 00:32:33.855 [2024-11-20T08:17:59.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.855 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:33.855 Nvme1n1 : 1.00 187701.22 733.21 0.00 0.00 678.18 302.08 1979.73 00:32:33.855 [2024-11-20T08:17:59.384Z] =================================================================================================================== 00:32:33.855 [2024-11-20T08:17:59.384Z] Total : 187701.22 733.21 0.00 0.00 678.18 302.08 1979.73 00:32:33.855 7264.00 IOPS, 28.38 MiB/s 00:32:33.855 Latency(us) 00:32:33.855 [2024-11-20T08:17:59.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.855 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:33.855 Nvme1n1 : 1.01 7387.18 28.86 0.00 0.00 17279.38 4014.08 39103.15 00:32:33.855 [2024-11-20T08:17:59.384Z] =================================================================================================================== 00:32:33.855 [2024-11-20T08:17:59.384Z] Total : 7387.18 28.86 0.00 0.00 17279.38 4014.08 39103.15 00:32:33.855 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 935151 00:32:33.855 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 935153 00:32:33.855 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 935156 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:34.116 09:17:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:34.116 rmmod nvme_tcp 00:32:34.116 rmmod nvme_fabrics 00:32:34.116 rmmod nvme_keyring 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 935092 ']' 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 935092 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 935092 ']' 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 935092 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 935092 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 935092' 00:32:34.116 killing process with pid 935092 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 935092 00:32:34.116 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 935092 00:32:34.377 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:34.377 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:34.377 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:34.377 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:34.377 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:32:34.377 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:34.377 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:34.377 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:34.377 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:34.377 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.377 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.377 09:17:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.289 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:36.289 00:32:36.289 real 0m13.150s 00:32:36.289 user 0m15.765s 00:32:36.289 sys 0m7.744s 00:32:36.289 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:36.289 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:36.289 ************************************ 00:32:36.289 END TEST nvmf_bdev_io_wait 00:32:36.289 ************************************ 00:32:36.551 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:36.551 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:36.551 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:36.551 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:36.551 ************************************ 00:32:36.551 START TEST nvmf_queue_depth 00:32:36.551 ************************************ 00:32:36.551 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:36.551 * Looking for test storage... 
00:32:36.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:36.551 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:36.551 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:32:36.551 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:36.551 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:36.812 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:36.812 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:36.812 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:36.812 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:36.812 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:36.812 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:36.812 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:36.812 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:36.812 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:36.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.813 --rc genhtml_branch_coverage=1 00:32:36.813 --rc genhtml_function_coverage=1 00:32:36.813 --rc genhtml_legend=1 00:32:36.813 --rc geninfo_all_blocks=1 00:32:36.813 --rc geninfo_unexecuted_blocks=1 00:32:36.813 00:32:36.813 ' 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:36.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.813 --rc genhtml_branch_coverage=1 00:32:36.813 --rc genhtml_function_coverage=1 00:32:36.813 --rc genhtml_legend=1 00:32:36.813 --rc geninfo_all_blocks=1 00:32:36.813 --rc geninfo_unexecuted_blocks=1 00:32:36.813 00:32:36.813 ' 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:36.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.813 --rc genhtml_branch_coverage=1 00:32:36.813 --rc genhtml_function_coverage=1 00:32:36.813 --rc genhtml_legend=1 00:32:36.813 --rc geninfo_all_blocks=1 00:32:36.813 --rc geninfo_unexecuted_blocks=1 00:32:36.813 00:32:36.813 ' 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:36.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.813 --rc genhtml_branch_coverage=1 00:32:36.813 --rc genhtml_function_coverage=1 00:32:36.813 --rc genhtml_legend=1 00:32:36.813 --rc 
geninfo_all_blocks=1 00:32:36.813 --rc geninfo_unexecuted_blocks=1 00:32:36.813 00:32:36.813 ' 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.813 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:36.813 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.813 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:36.813 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:36.814 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:36.814 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:44.951 
09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:44.951 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.951 09:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:44.951 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:44.951 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.951 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:44.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:44.952 09:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:44.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:44.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:32:44.952 00:32:44.952 --- 10.0.0.2 ping statistics --- 00:32:44.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.952 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:44.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:44.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:32:44.952 00:32:44.952 --- 10.0.0.1 ping statistics --- 00:32:44.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.952 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:44.952 09:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=940161 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 940161 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 940161 ']' 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.952 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.952 [2024-11-20 09:18:09.722873] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:44.952 [2024-11-20 09:18:09.724111] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:32:44.952 [2024-11-20 09:18:09.724172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:44.952 [2024-11-20 09:18:09.826474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.952 [2024-11-20 09:18:09.876630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:44.952 [2024-11-20 09:18:09.876679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:44.952 [2024-11-20 09:18:09.876688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:44.952 [2024-11-20 09:18:09.876695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:44.952 [2024-11-20 09:18:09.876702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:44.952 [2024-11-20 09:18:09.877485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.952 [2024-11-20 09:18:09.954987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:44.952 [2024-11-20 09:18:09.955292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:45.213 [2024-11-20 09:18:10.574349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:45.213 Malloc0 00:32:45.213 09:18:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:45.213 [2024-11-20 09:18:10.658538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.213 
09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=940481 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 940481 /var/tmp/bdevperf.sock 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 940481 ']' 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:45.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.213 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:45.213 [2024-11-20 09:18:10.717454] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:32:45.213 [2024-11-20 09:18:10.717525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940481 ] 00:32:45.474 [2024-11-20 09:18:10.809076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.474 [2024-11-20 09:18:10.863403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.044 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.044 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:46.044 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:46.044 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.044 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:46.304 NVMe0n1 00:32:46.304 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.304 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:46.564 Running I/O for 10 seconds... 
00:32:48.449 8578.00 IOPS, 33.51 MiB/s [2024-11-20T08:18:14.919Z] 8836.50 IOPS, 34.52 MiB/s [2024-11-20T08:18:16.300Z] 9525.67 IOPS, 37.21 MiB/s [2024-11-20T08:18:17.241Z] 10490.75 IOPS, 40.98 MiB/s [2024-11-20T08:18:18.181Z] 11113.40 IOPS, 43.41 MiB/s [2024-11-20T08:18:19.124Z] 11582.83 IOPS, 45.25 MiB/s [2024-11-20T08:18:20.065Z] 11859.43 IOPS, 46.33 MiB/s [2024-11-20T08:18:21.007Z] 12115.25 IOPS, 47.33 MiB/s [2024-11-20T08:18:21.947Z] 12286.89 IOPS, 48.00 MiB/s [2024-11-20T08:18:21.947Z] 12427.10 IOPS, 48.54 MiB/s 00:32:56.418 Latency(us) 00:32:56.418 [2024-11-20T08:18:21.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.418 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:56.418 Verification LBA range: start 0x0 length 0x4000 00:32:56.418 NVMe0n1 : 10.04 12458.28 48.67 0.00 0.00 81911.07 10212.69 69905.07 00:32:56.418 [2024-11-20T08:18:21.947Z] =================================================================================================================== 00:32:56.418 [2024-11-20T08:18:21.947Z] Total : 12458.28 48.67 0.00 0.00 81911.07 10212.69 69905.07 00:32:56.418 { 00:32:56.418 "results": [ 00:32:56.418 { 00:32:56.418 "job": "NVMe0n1", 00:32:56.418 "core_mask": "0x1", 00:32:56.418 "workload": "verify", 00:32:56.418 "status": "finished", 00:32:56.418 "verify_range": { 00:32:56.418 "start": 0, 00:32:56.418 "length": 16384 00:32:56.418 }, 00:32:56.418 "queue_depth": 1024, 00:32:56.418 "io_size": 4096, 00:32:56.418 "runtime": 10.044565, 00:32:56.418 "iops": 12458.279676621138, 00:32:56.418 "mibps": 48.66515498680132, 00:32:56.418 "io_failed": 0, 00:32:56.418 "io_timeout": 0, 00:32:56.418 "avg_latency_us": 81911.0733660439, 00:32:56.418 "min_latency_us": 10212.693333333333, 00:32:56.418 "max_latency_us": 69905.06666666667 00:32:56.418 } 00:32:56.418 ], 00:32:56.418 "core_count": 1 00:32:56.418 } 00:32:56.678 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 940481 00:32:56.678 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 940481 ']' 00:32:56.678 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 940481 00:32:56.678 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:56.678 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:56.678 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 940481 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 940481' 00:32:56.678 killing process with pid 940481 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 940481 00:32:56.678 Received shutdown signal, test time was about 10.000000 seconds 00:32:56.678 00:32:56.678 Latency(us) 00:32:56.678 [2024-11-20T08:18:22.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.678 [2024-11-20T08:18:22.207Z] =================================================================================================================== 00:32:56.678 [2024-11-20T08:18:22.207Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 940481 00:32:56.678 09:18:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:56.678 rmmod nvme_tcp 00:32:56.678 rmmod nvme_fabrics 00:32:56.678 rmmod nvme_keyring 00:32:56.678 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:56.679 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:56.679 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:56.679 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 940161 ']' 00:32:56.679 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 940161 00:32:56.679 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 940161 ']' 00:32:56.679 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 940161 00:32:56.679 09:18:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:56.679 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:56.679 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 940161 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 940161' 00:32:56.939 killing process with pid 940161 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 940161 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 940161 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:56.939 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.481 00:32:59.481 real 0m22.575s 00:32:59.481 user 0m24.861s 00:32:59.481 sys 0m7.421s 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:59.481 ************************************ 00:32:59.481 END TEST nvmf_queue_depth 00:32:59.481 ************************************ 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:59.481 ************************************ 00:32:59.481 START 
TEST nvmf_target_multipath 00:32:59.481 ************************************ 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:59.481 * Looking for test storage... 00:32:59.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.481 09:18:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:59.481 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:59.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.482 --rc genhtml_branch_coverage=1 00:32:59.482 --rc genhtml_function_coverage=1 00:32:59.482 --rc genhtml_legend=1 00:32:59.482 --rc geninfo_all_blocks=1 00:32:59.482 --rc geninfo_unexecuted_blocks=1 00:32:59.482 00:32:59.482 ' 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:59.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.482 --rc genhtml_branch_coverage=1 00:32:59.482 --rc genhtml_function_coverage=1 00:32:59.482 --rc genhtml_legend=1 00:32:59.482 --rc geninfo_all_blocks=1 00:32:59.482 --rc geninfo_unexecuted_blocks=1 00:32:59.482 00:32:59.482 ' 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:59.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.482 --rc genhtml_branch_coverage=1 00:32:59.482 --rc genhtml_function_coverage=1 00:32:59.482 --rc genhtml_legend=1 00:32:59.482 --rc geninfo_all_blocks=1 00:32:59.482 --rc geninfo_unexecuted_blocks=1 00:32:59.482 00:32:59.482 ' 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:59.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.482 --rc genhtml_branch_coverage=1 00:32:59.482 --rc genhtml_function_coverage=1 00:32:59.482 --rc genhtml_legend=1 00:32:59.482 --rc geninfo_all_blocks=1 00:32:59.482 --rc geninfo_unexecuted_blocks=1 00:32:59.482 00:32:59.482 ' 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.482 09:18:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.482 09:18:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:59.482 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:07.619 09:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:07.619 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:07.619 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:07.619 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.619 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.620 09:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:07.620 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:07.620 09:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:07.620 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:07.620 09:18:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:07.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:07.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:33:07.620 00:33:07.620 --- 10.0.0.2 ping statistics --- 00:33:07.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.620 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:07.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:07.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:33:07.620 00:33:07.620 --- 10.0.0.1 ping statistics --- 00:33:07.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.620 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:07.620 only one NIC for nvmf test 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:07.620 09:18:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.620 rmmod nvme_tcp 00:33:07.620 rmmod nvme_fabrics 00:33:07.620 rmmod nvme_keyring 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:07.620 09:18:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.620 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.005 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.006 
09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.006 00:33:09.006 real 0m9.978s 00:33:09.006 user 0m2.223s 00:33:09.006 sys 0m5.704s 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.006 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:09.006 ************************************ 00:33:09.006 END TEST nvmf_target_multipath 00:33:09.006 ************************************ 00:33:09.266 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:09.266 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:09.266 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.266 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:09.266 ************************************ 00:33:09.266 START TEST nvmf_zcopy 00:33:09.266 ************************************ 00:33:09.266 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:09.266 * Looking for test storage... 
00:33:09.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:09.266 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:09.266 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:09.266 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.267 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:09.528 09:18:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:09.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.528 --rc genhtml_branch_coverage=1 00:33:09.528 --rc genhtml_function_coverage=1 00:33:09.528 --rc genhtml_legend=1 00:33:09.528 --rc geninfo_all_blocks=1 00:33:09.528 --rc geninfo_unexecuted_blocks=1 00:33:09.528 00:33:09.528 ' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:09.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.528 --rc genhtml_branch_coverage=1 00:33:09.528 --rc genhtml_function_coverage=1 00:33:09.528 --rc genhtml_legend=1 00:33:09.528 --rc geninfo_all_blocks=1 00:33:09.528 --rc geninfo_unexecuted_blocks=1 00:33:09.528 00:33:09.528 ' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:09.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.528 --rc genhtml_branch_coverage=1 00:33:09.528 --rc genhtml_function_coverage=1 00:33:09.528 --rc genhtml_legend=1 00:33:09.528 --rc geninfo_all_blocks=1 00:33:09.528 --rc geninfo_unexecuted_blocks=1 00:33:09.528 00:33:09.528 ' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:09.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.528 --rc genhtml_branch_coverage=1 00:33:09.528 --rc genhtml_function_coverage=1 00:33:09.528 --rc genhtml_legend=1 00:33:09.528 --rc geninfo_all_blocks=1 00:33:09.528 --rc geninfo_unexecuted_blocks=1 00:33:09.528 00:33:09.528 ' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.528 09:18:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.528 09:18:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.528 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:17.670 
09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.670 09:18:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:17.670 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:17.670 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
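The vendor/device matching that `gather_supported_nvmf_pci_devs` performs above (`intel=0x8086`, `mellanox=0x15b3`, with the E810/X722/ConnectX device IDs visible in the trace) can be sketched as a standalone helper. This is an illustrative reconstruction, not the script's actual code, and the function name is hypothetical:

```shell
# Hypothetical sketch of the NIC bucketing nvmf/common.sh applies above.
# Input: "vendor:device" as hex strings; output: the array the device lands in.
classify_nvmf_nic() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (driver "ice", as logged)
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx  ;;    # Mellanox ConnectX family IDs
    *)                           echo unknown ;;
  esac
}
```

The run above matched two `0x8086:0x159b` ports (`0000:4b:00.0` and `0000:4b:00.1`), which is why `pci_devs` was set from the `e810` list.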
00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:17.670 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:17.670 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:17.670 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.671 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.671 09:18:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:17.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:17.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:33:17.671 00:33:17.671 --- 10.0.0.2 ping statistics --- 00:33:17.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.671 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:17.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:33:17.671 00:33:17.671 --- 10.0.0.1 ping statistics --- 00:33:17.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.671 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=951097 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 951097 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 951097 ']' 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.671 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.671 [2024-11-20 09:18:42.350888] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:17.671 [2024-11-20 09:18:42.352036] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:33:17.671 [2024-11-20 09:18:42.352089] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.671 [2024-11-20 09:18:42.451712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.671 [2024-11-20 09:18:42.501874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:17.671 [2024-11-20 09:18:42.501929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.671 [2024-11-20 09:18:42.501937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.671 [2024-11-20 09:18:42.501944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.671 [2024-11-20 09:18:42.501951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:17.671 [2024-11-20 09:18:42.502703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.671 [2024-11-20 09:18:42.578413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:17.671 [2024-11-20 09:18:42.578711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
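The `nvmf_tcp_init` sequence logged above (namespace creation, moving the target-side port into it, addressing both ends, the iptables accept rule, and the namespaced `nvmf_tgt` launch) reduces to the plan below. Interface and namespace names are the ones this run used; the helper only prints the commands so the plan can be reviewed rather than executed (the real steps need root, and the iptables wrapper's `-m comment` tag is omitted here for brevity):

```shell
# Sketch of the nvmftestinit wiring from the trace above (phy mode, two-port E810).
# Emits the command plan instead of running it; run the output as root to reproduce.
plan_nvmf_tcp_init() {
  local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
  cat <<EOF
ip netns add $ns
ip link set $tgt netns $ns
ip addr add 10.0.0.1/24 dev $ini
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt
ip link set $ini up
ip netns exec $ns ip link set $tgt up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT
ip netns exec $ns nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
EOF
}
```

The cross-namespace pings in the log (10.0.0.1 ↔ 10.0.0.2) are the sanity check that this wiring succeeded before the target was started.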
00:33:17.671 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.671 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:17.671 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:17.671 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:17.671 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.932 [2024-11-20 09:18:43.215571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.932 
09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.932 [2024-11-20 09:18:43.243886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.932 malloc0 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:17.932 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:17.932 { 00:33:17.932 "params": { 00:33:17.932 "name": "Nvme$subsystem", 00:33:17.932 "trtype": "$TEST_TRANSPORT", 00:33:17.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.932 "adrfam": "ipv4", 00:33:17.932 "trsvcid": "$NVMF_PORT", 00:33:17.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.932 "hdgst": ${hdgst:-false}, 00:33:17.933 "ddgst": ${ddgst:-false} 00:33:17.933 }, 00:33:17.933 "method": "bdev_nvme_attach_controller" 00:33:17.933 } 00:33:17.933 EOF 00:33:17.933 )") 00:33:17.933 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:17.933 09:18:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:17.933 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:17.933 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:17.933 "params": { 00:33:17.933 "name": "Nvme1", 00:33:17.933 "trtype": "tcp", 00:33:17.933 "traddr": "10.0.0.2", 00:33:17.933 "adrfam": "ipv4", 00:33:17.933 "trsvcid": "4420", 00:33:17.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:17.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:17.933 "hdgst": false, 00:33:17.933 "ddgst": false 00:33:17.933 }, 00:33:17.933 "method": "bdev_nvme_attach_controller" 00:33:17.933 }' 00:33:17.933 [2024-11-20 09:18:43.348233] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:33:17.933 [2024-11-20 09:18:43.348300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951242 ] 00:33:17.933 [2024-11-20 09:18:43.441155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.193 [2024-11-20 09:18:43.494735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.193 Running I/O for 10 seconds... 
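The `--json /dev/fd/62` payload that `gen_nvmf_target_json` assembled for bdevperf is printed verbatim by the `printf` in the trace above; wrapped in a helper (the function name is ours), it is this single-controller attach config:

```shell
# The bdevperf attach config emitted by gen_nvmf_target_json in the log above,
# reproduced exactly; feed it to bdevperf via --json /dev/fd/62 the same way.
nvme1_attach_json() {
  cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

With this config, bdevperf attaches to the subsystem created earlier (`nqn.2016-06.io.spdk:cnode1` at 10.0.0.2:4420) and drives the 10-second verify workload whose IOPS samples follow.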
00:33:20.147 6387.00 IOPS, 49.90 MiB/s [2024-11-20T08:18:47.059Z] 6448.50 IOPS, 50.38 MiB/s [2024-11-20T08:18:48.003Z] 6470.00 IOPS, 50.55 MiB/s [2024-11-20T08:18:48.944Z] 6474.00 IOPS, 50.58 MiB/s [2024-11-20T08:18:49.883Z] 6710.20 IOPS, 52.42 MiB/s [2024-11-20T08:18:50.823Z] 7202.50 IOPS, 56.27 MiB/s [2024-11-20T08:18:51.764Z] 7551.57 IOPS, 59.00 MiB/s [2024-11-20T08:18:52.706Z] 7815.62 IOPS, 61.06 MiB/s [2024-11-20T08:18:54.088Z] 8021.56 IOPS, 62.67 MiB/s [2024-11-20T08:18:54.088Z] 8186.00 IOPS, 63.95 MiB/s 00:33:28.559 Latency(us) 00:33:28.559 [2024-11-20T08:18:54.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.559 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:28.559 Verification LBA range: start 0x0 length 0x1000 00:33:28.559 Nvme1n1 : 10.05 8156.50 63.72 0.00 0.00 15590.12 2990.08 45438.29 00:33:28.559 [2024-11-20T08:18:54.088Z] =================================================================================================================== 00:33:28.559 [2024-11-20T08:18:54.088Z] Total : 8156.50 63.72 0.00 0.00 15590.12 2990.08 45438.29 00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=953147 00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:28.559 09:18:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:28.559 { 00:33:28.559 "params": { 00:33:28.559 "name": "Nvme$subsystem", 00:33:28.559 "trtype": "$TEST_TRANSPORT", 00:33:28.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.559 "adrfam": "ipv4", 00:33:28.559 "trsvcid": "$NVMF_PORT", 00:33:28.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.559 "hdgst": ${hdgst:-false}, 00:33:28.559 "ddgst": ${ddgst:-false} 00:33:28.559 }, 00:33:28.559 "method": "bdev_nvme_attach_controller" 00:33:28.559 } 00:33:28.559 EOF 00:33:28.559 )") 00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:28.559 [2024-11-20 09:18:53.843103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.559 [2024-11-20 09:18:53.843130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:28.559 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:28.559 "params": { 00:33:28.559 "name": "Nvme1", 00:33:28.559 "trtype": "tcp", 00:33:28.559 "traddr": "10.0.0.2", 00:33:28.559 "adrfam": "ipv4", 00:33:28.559 "trsvcid": "4420", 00:33:28.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:28.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:28.559 "hdgst": false, 00:33:28.559 "ddgst": false 00:33:28.559 }, 00:33:28.559 "method": "bdev_nvme_attach_controller" 00:33:28.559 }' 00:33:28.559 [2024-11-20 09:18:53.855071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.559 [2024-11-20 09:18:53.855080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.559 [2024-11-20 09:18:53.867069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.559 [2024-11-20 09:18:53.867077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.559 [2024-11-20 09:18:53.879070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.559 [2024-11-20 09:18:53.879077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.559 [2024-11-20 09:18:53.886010] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:33:28.559 [2024-11-20 09:18:53.886057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953147 ]
00:33:28.559 [2024-11-20 09:18:53.891069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:28.559 [2024-11-20 09:18:53.891077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pair repeats at roughly 12-15 ms intervals through 09:18:55; only distinct records are kept below ...]
00:33:28.559 [2024-11-20 09:18:53.969467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:28.559 [2024-11-20 09:18:53.998702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:28.819 Running I/O for 5 seconds...
00:33:29.863 18741.00 IOPS, 146.41 MiB/s [2024-11-20T08:18:55.392Z]
00:33:30.690 [2024-11-20 09:18:55.943293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:30.690 [2024-11-20 09:18:55.943308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:30.690 [2024-11-20 09:18:55.955779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:55.955794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:55.969900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:55.969915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:55.983199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:55.983214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:55.995725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:55.995741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.010863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.010879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.024227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.024242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.037917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.037931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.051068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.051084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.063461] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.063476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.078461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.078477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.091705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.091720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.106052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.106067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.119386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.119401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.133970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.133988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 18749.00 IOPS, 146.48 MiB/s [2024-11-20T08:18:56.219Z] [2024-11-20 09:18:56.147541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.147557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.162266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.162281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.175680] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.175695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.190204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.190219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.690 [2024-11-20 09:18:56.203517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.690 [2024-11-20 09:18:56.203536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.218408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.218424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.231732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.231748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.246675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.246691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.260092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.260107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.274342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.274358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.287519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.287533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.302217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.302232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.315607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.315622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.330493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.330508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.343315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.343330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.356209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.356225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.370614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.370629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.383700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.383715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.398040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 
[2024-11-20 09:18:56.398055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.411465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.411480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.425992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.426007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.439007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.439023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.452292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.452307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.466708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.466727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.480032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.480048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.986 [2024-11-20 09:18:56.494010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.986 [2024-11-20 09:18:56.494026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.507181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.507197] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.520403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.520418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.534497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.534513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.547583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.547598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.562297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.562312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.575631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.575646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.590495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.590511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.603640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.603654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.618069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.618084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:31.258 [2024-11-20 09:18:56.631628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.631644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.646462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.646478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.659887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.659902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.674625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.674640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.688475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.688490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.702510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.702525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.715656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.715671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.730992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.731011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.744301] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.744316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.758493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.758507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.258 [2024-11-20 09:18:56.772112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.258 [2024-11-20 09:18:56.772127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.518 [2024-11-20 09:18:56.786675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.786690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.800121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.800136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.814444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.814459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.827540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.827555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.842288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.842303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.855518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.855533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.870078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.870093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.883366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.883381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.898240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.898255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.911561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.911575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.926268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.926283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.939641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.939656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.954281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.954297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.967469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 
[2024-11-20 09:18:56.967483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.982305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.982320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:56.995650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:56.995668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:57.010466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:57.010481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:57.023597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:57.023611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.519 [2024-11-20 09:18:57.038240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.519 [2024-11-20 09:18:57.038255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.051450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.051468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.066644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.066660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.080005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.080020] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.094103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.094118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.107061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.107076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.119610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.119625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.134051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.134067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 18778.00 IOPS, 146.70 MiB/s [2024-11-20T08:18:57.309Z] [2024-11-20 09:18:57.147064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.147080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.160126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.160141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.174222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.174237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.187488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.187502] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.202260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.202275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.215305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.215320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.228213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.228228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.242228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.242244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.255677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.255692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.270215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.270230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.283593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.283608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 [2024-11-20 09:18:57.298802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-20 09:18:57.298818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:32.041 [2024-11-20 09:18:57.312291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.041 [2024-11-20 09:18:57.312308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.041 [2024-11-20 09:18:57.326508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.041 [2024-11-20 09:18:57.326524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.041 [2024-11-20 09:18:57.339373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.041 [2024-11-20 09:18:57.339388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.354131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.354147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.367057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.367072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.380296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.380311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.395086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.395101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.408189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.408204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.422723] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.422738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.435764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.435779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.450451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.450467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.463295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.463311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.475528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.475543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.490627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.490643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.504026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.504042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.518287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.518302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042 [2024-11-20 09:18:57.531400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:32.042 [2024-11-20 09:18:57.531415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.042
[... the error pair "subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats continuously, roughly every 13 ms, from 09:18:57.546 through 09:18:59.247 (over 100 near-identical occurrences elided) ...]
18795.50 IOPS, 146.84 MiB/s [2024-11-20T08:18:58.354Z]
18798.20 IOPS, 146.86 MiB/s [2024-11-20T08:18:59.414Z]
00:33:33.885 Latency(us)
00:33:33.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:33.885 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:33.885 Nvme1n1 : 5.01 18799.68 146.87 0.00 0.00 6802.48 2034.35 11578.03
00:33:33.885 ===================================================================================================================
00:33:33.885 Total : 18799.68 146.87 0.00 0.00 6802.48 2034.35 11578.03
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (953147) - No such process 00:33:33.885
09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 953147
09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:33.885
09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.885 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.885 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.885 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:33.885 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.885 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.885 delay0 00:33:33.885 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.885 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:33.885 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.885 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.885 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.885 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:33.885 [2024-11-20 09:18:59.410768] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:40.466 Initializing NVMe Controllers 00:33:40.466 Attached 
to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:40.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:40.466 Initialization complete. Launching workers. 00:33:40.466 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 799 00:33:40.466 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1074, failed to submit 45 00:33:40.466 success 947, unsuccessful 127, failed 0 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:40.466 rmmod nvme_tcp 00:33:40.466 rmmod nvme_fabrics 00:33:40.466 rmmod nvme_keyring 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 951097 
']' 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 951097 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 951097 ']' 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 951097 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 951097 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 951097' 00:33:40.466 killing process with pid 951097 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 951097 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 951097 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:40.466 09:19:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:40.466 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.008 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:43.008 00:33:43.008 real 0m33.368s 00:33:43.008 user 0m42.042s 00:33:43.008 sys 0m12.167s 00:33:43.008 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:43.008 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:43.008 ************************************ 00:33:43.008 END TEST nvmf_zcopy 00:33:43.008 ************************************ 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:43.008 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:43.008 ************************************ 00:33:43.008 START TEST nvmf_nmic 00:33:43.008 ************************************ 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:43.008 * Looking for test storage... 00:33:43.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@337 -- # read -ra ver2 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@355 -- # echo 2 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:43.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.008 --rc genhtml_branch_coverage=1 00:33:43.008 --rc genhtml_function_coverage=1 00:33:43.008 --rc genhtml_legend=1 00:33:43.008 --rc geninfo_all_blocks=1 00:33:43.008 --rc geninfo_unexecuted_blocks=1 00:33:43.008 00:33:43.008 ' 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:43.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.008 --rc genhtml_branch_coverage=1 00:33:43.008 --rc genhtml_function_coverage=1 00:33:43.008 --rc genhtml_legend=1 00:33:43.008 --rc geninfo_all_blocks=1 00:33:43.008 --rc geninfo_unexecuted_blocks=1 00:33:43.008 00:33:43.008 ' 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:43.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.008 --rc genhtml_branch_coverage=1 00:33:43.008 --rc genhtml_function_coverage=1 00:33:43.008 --rc genhtml_legend=1 00:33:43.008 --rc geninfo_all_blocks=1 00:33:43.008 --rc geninfo_unexecuted_blocks=1 00:33:43.008 
00:33:43.008 ' 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:43.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.008 --rc genhtml_branch_coverage=1 00:33:43.008 --rc genhtml_function_coverage=1 00:33:43.008 --rc genhtml_legend=1 00:33:43.008 --rc geninfo_all_blocks=1 00:33:43.008 --rc geninfo_unexecuted_blocks=1 00:33:43.008 00:33:43.008 ' 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.008 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.009 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:43.009 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.146 09:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:51.146 09:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:51.146 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:51.146 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.146 09:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:51.146 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.146 09:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:51.146 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:51.146 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:51.147 09:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:51.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:51.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:33:51.147 00:33:51.147 --- 10.0.0.2 ping statistics --- 00:33:51.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.147 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:51.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:51.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:33:51.147 00:33:51.147 --- 10.0.0.1 ping statistics --- 00:33:51.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.147 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=959655 
00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 959655 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 959655 ']' 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.147 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.147 [2024-11-20 09:19:15.675949] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:51.147 [2024-11-20 09:19:15.677068] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:33:51.147 [2024-11-20 09:19:15.677120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.147 [2024-11-20 09:19:15.776276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:51.147 [2024-11-20 09:19:15.830678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:51.147 [2024-11-20 09:19:15.830731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:51.147 [2024-11-20 09:19:15.830740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:51.147 [2024-11-20 09:19:15.830747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:51.147 [2024-11-20 09:19:15.830754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:51.147 [2024-11-20 09:19:15.832827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.147 [2024-11-20 09:19:15.832986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:51.147 [2024-11-20 09:19:15.833148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:51.147 [2024-11-20 09:19:15.833149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.147 [2024-11-20 09:19:15.911052] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:51.147 [2024-11-20 09:19:15.912224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:51.147 [2024-11-20 09:19:15.912362] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:51.147 [2024-11-20 09:19:15.912575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:51.147 [2024-11-20 09:19:15.912666] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.147 [2024-11-20 09:19:16.514138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.147 Malloc0 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.147 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.148 [2024-11-20 09:19:16.598422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.148 09:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:51.148 test case1: single bdev can't be used in multiple subsystems 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.148 [2024-11-20 09:19:16.633781] 
bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:51.148 [2024-11-20 09:19:16.633802] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:51.148 [2024-11-20 09:19:16.633811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:51.148 request: 00:33:51.148 { 00:33:51.148 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:51.148 "namespace": { 00:33:51.148 "bdev_name": "Malloc0", 00:33:51.148 "no_auto_visible": false 00:33:51.148 }, 00:33:51.148 "method": "nvmf_subsystem_add_ns", 00:33:51.148 "req_id": 1 00:33:51.148 } 00:33:51.148 Got JSON-RPC error response 00:33:51.148 response: 00:33:51.148 { 00:33:51.148 "code": -32602, 00:33:51.148 "message": "Invalid parameters" 00:33:51.148 } 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:51.148 Adding namespace failed - expected result. 
00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:51.148 test case2: host connect to nvmf target in multiple paths 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.148 [2024-11-20 09:19:16.645877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.148 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:51.719 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:52.290 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:52.290 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:33:52.290 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:52.290 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:52.290 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:33:54.201 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:54.201 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:54.201 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:54.201 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:54.201 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:54.201 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:33:54.201 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:54.201 [global] 00:33:54.201 thread=1 00:33:54.201 invalidate=1 00:33:54.201 rw=write 00:33:54.201 time_based=1 00:33:54.201 runtime=1 00:33:54.201 ioengine=libaio 00:33:54.201 direct=1 00:33:54.201 bs=4096 00:33:54.201 iodepth=1 00:33:54.201 norandommap=0 00:33:54.201 numjobs=1 00:33:54.201 00:33:54.201 verify_dump=1 00:33:54.201 verify_backlog=512 00:33:54.201 verify_state_save=0 00:33:54.201 do_verify=1 00:33:54.201 verify=crc32c-intel 00:33:54.201 [job0] 00:33:54.201 filename=/dev/nvme0n1 00:33:54.201 Could not set queue depth (nvme0n1) 00:33:54.461 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.461 fio-3.35 00:33:54.461 Starting 1 thread 00:33:55.845 00:33:55.845 job0: (groupid=0, jobs=1): err= 0: pid=960667: Wed Nov 20 
09:19:21 2024 00:33:55.845 read: IOPS=649, BW=2597KiB/s (2660kB/s)(2600KiB/1001msec) 00:33:55.845 slat (nsec): min=7402, max=60875, avg=23855.94, stdev=7871.88 00:33:55.845 clat (usec): min=563, max=1003, avg=803.59, stdev=73.26 00:33:55.845 lat (usec): min=571, max=1030, avg=827.45, stdev=76.86 00:33:55.845 clat percentiles (usec): 00:33:55.845 | 1.00th=[ 652], 5.00th=[ 668], 10.00th=[ 693], 20.00th=[ 750], 00:33:55.845 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[ 799], 60.00th=[ 824], 00:33:55.845 | 70.00th=[ 857], 80.00th=[ 873], 90.00th=[ 898], 95.00th=[ 914], 00:33:55.845 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 1004], 99.95th=[ 1004], 00:33:55.845 | 99.99th=[ 1004] 00:33:55.845 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:33:55.845 slat (nsec): min=9926, max=63736, avg=26774.70, stdev=11782.87 00:33:55.845 clat (usec): min=220, max=3247, avg=412.64, stdev=111.74 00:33:55.845 lat (usec): min=230, max=3259, avg=439.41, stdev=115.53 00:33:55.845 clat percentiles (usec): 00:33:55.845 | 1.00th=[ 241], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 338], 00:33:55.845 | 30.00th=[ 363], 40.00th=[ 404], 50.00th=[ 429], 60.00th=[ 445], 00:33:55.845 | 70.00th=[ 461], 80.00th=[ 474], 90.00th=[ 482], 95.00th=[ 502], 00:33:55.845 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 562], 99.95th=[ 3261], 00:33:55.845 | 99.99th=[ 3261] 00:33:55.845 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:55.845 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:55.845 lat (usec) : 250=0.84%, 500=57.35%, 750=10.27%, 1000=31.36% 00:33:55.845 lat (msec) : 2=0.12%, 4=0.06% 00:33:55.845 cpu : usr=2.50%, sys=4.10%, ctx=1677, majf=0, minf=1 00:33:55.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.845 
issued rwts: total=650,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:55.845 00:33:55.845 Run status group 0 (all jobs): 00:33:55.845 READ: bw=2597KiB/s (2660kB/s), 2597KiB/s-2597KiB/s (2660kB/s-2660kB/s), io=2600KiB (2662kB), run=1001-1001msec 00:33:55.845 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:33:55.845 00:33:55.845 Disk stats (read/write): 00:33:55.845 nvme0n1: ios=565/1024, merge=0/0, ticks=828/422, in_queue=1250, util=99.60% 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:55.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:55.845 
09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.845 rmmod nvme_tcp 00:33:55.845 rmmod nvme_fabrics 00:33:55.845 rmmod nvme_keyring 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 959655 ']' 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 959655 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 959655 ']' 00:33:55.845 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 959655 00:33:55.846 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:33:55.846 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.846 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 959655 
00:33:55.846 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:55.846 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:55.846 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 959655' 00:33:55.846 killing process with pid 959655 00:33:55.846 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 959655 00:33:55.846 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 959655 00:33:56.106 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:56.106 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:56.106 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:56.106 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:56.106 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:56.106 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:56.106 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:56.106 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:56.106 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:56.106 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.106 09:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.106 09:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:58.650 00:33:58.650 real 0m15.504s 00:33:58.650 user 0m31.350s 00:33:58.650 sys 0m7.349s 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.650 ************************************ 00:33:58.650 END TEST nvmf_nmic 00:33:58.650 ************************************ 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:58.650 ************************************ 00:33:58.650 START TEST nvmf_fio_target 00:33:58.650 ************************************ 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:58.650 * Looking for test storage... 
00:33:58.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:58.650 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:58.651 
09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:58.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.651 --rc genhtml_branch_coverage=1 00:33:58.651 --rc genhtml_function_coverage=1 00:33:58.651 --rc genhtml_legend=1 00:33:58.651 --rc geninfo_all_blocks=1 00:33:58.651 --rc geninfo_unexecuted_blocks=1 00:33:58.651 00:33:58.651 ' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:58.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.651 --rc genhtml_branch_coverage=1 00:33:58.651 --rc genhtml_function_coverage=1 00:33:58.651 --rc genhtml_legend=1 00:33:58.651 --rc geninfo_all_blocks=1 00:33:58.651 --rc geninfo_unexecuted_blocks=1 00:33:58.651 00:33:58.651 ' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:58.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.651 --rc genhtml_branch_coverage=1 00:33:58.651 --rc genhtml_function_coverage=1 00:33:58.651 --rc genhtml_legend=1 00:33:58.651 --rc geninfo_all_blocks=1 00:33:58.651 --rc geninfo_unexecuted_blocks=1 00:33:58.651 00:33:58.651 ' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:58.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.651 --rc genhtml_branch_coverage=1 00:33:58.651 --rc genhtml_function_coverage=1 00:33:58.651 --rc genhtml_legend=1 00:33:58.651 --rc geninfo_all_blocks=1 
00:33:58.651 --rc geninfo_unexecuted_blocks=1 00:33:58.651 00:33:58.651 ' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:58.651 
09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.651 09:19:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:58.651 
09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:58.651 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:58.651 09:19:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:05.230 09:19:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:05.230 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:05.230 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.230 
09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:05.230 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.230 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:05.231 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:05.231 09:19:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:34:05.231 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:05.493 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:05.494 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:05.494 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:05.494 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:05.494 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:05.494 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:05.494 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:05.494 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:05.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:05.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:34:05.495 00:34:05.495 --- 10.0.0.2 ping statistics --- 00:34:05.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.495 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:05.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:05.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:34:05.759 00:34:05.759 --- 10.0.0.1 ping statistics --- 00:34:05.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.759 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:05.759 09:19:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=965008 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 965008 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 965008 ']' 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.759 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:05.759 [2024-11-20 09:19:31.145621] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:05.759 [2024-11-20 09:19:31.146735] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:34:05.759 [2024-11-20 09:19:31.146787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.759 [2024-11-20 09:19:31.247486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:06.020 [2024-11-20 09:19:31.301029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:06.020 [2024-11-20 09:19:31.301085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:06.020 [2024-11-20 09:19:31.301094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:06.020 [2024-11-20 09:19:31.301101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:06.020 [2024-11-20 09:19:31.301107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:06.020 [2024-11-20 09:19:31.303117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.020 [2024-11-20 09:19:31.303275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:06.020 [2024-11-20 09:19:31.303585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:06.020 [2024-11-20 09:19:31.303588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.020 [2024-11-20 09:19:31.364663] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:06.020 [2024-11-20 09:19:31.365891] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:06.020 [2024-11-20 09:19:31.366412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:06.020 [2024-11-20 09:19:31.367067] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:06.020 [2024-11-20 09:19:31.367113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:06.590 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:06.590 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:06.590 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:06.590 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:06.590 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:06.590 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:06.590 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:06.850 [2024-11-20 09:19:32.224703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.850 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:07.109 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:07.109 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512
00:34:07.368 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:34:07.368 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:07.368 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:34:07.368 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:07.628 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:34:07.628 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:34:07.887 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:08.147 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:34:08.147 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:08.147 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:34:08.147 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:08.406 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:34:08.406 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:34:08.666 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:34:08.666 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:34:08.666 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:08.925 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:34:08.925 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:34:08.925 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:09.186 [2024-11-20 09:19:34.596616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:09.186 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:34:09.445 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:34:09.705 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:34:09.966 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:34:09.966 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:34:09.966 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:34:09.966 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:34:09.966 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:34:09.966 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:34:12.507 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:34:12.507 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:34:12.507 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:34:12.507 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:34:12.507 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:34:12.507 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:34:12.507 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:34:12.507 [global]
00:34:12.507 thread=1
00:34:12.507 invalidate=1
00:34:12.507 rw=write
00:34:12.507 time_based=1
00:34:12.507 runtime=1
00:34:12.507 ioengine=libaio
00:34:12.507 direct=1
00:34:12.507 bs=4096
00:34:12.507 iodepth=1
00:34:12.507 norandommap=0
00:34:12.507 numjobs=1
00:34:12.507
00:34:12.507 verify_dump=1
00:34:12.507 verify_backlog=512
00:34:12.507 verify_state_save=0
00:34:12.507 do_verify=1
00:34:12.507 verify=crc32c-intel
00:34:12.507 [job0]
00:34:12.507 filename=/dev/nvme0n1
00:34:12.507 [job1]
00:34:12.507 filename=/dev/nvme0n2
00:34:12.507 [job2]
00:34:12.507 filename=/dev/nvme0n3
00:34:12.507 [job3]
00:34:12.507 filename=/dev/nvme0n4
00:34:12.507 Could not set queue depth (nvme0n1)
00:34:12.507 Could not set queue depth (nvme0n2)
00:34:12.507 Could not set queue depth (nvme0n3)
00:34:12.507 Could not set queue depth (nvme0n4)
00:34:12.507 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:12.507 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:12.507 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:12.507 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:12.507 fio-3.35
00:34:12.507 Starting 4 threads
00:34:13.889
00:34:13.889 job0: (groupid=0, jobs=1): err= 0: pid=966591: Wed Nov 20 09:19:39 2024
00:34:13.889 read: IOPS=723, BW=2893KiB/s (2963kB/s)(2896KiB/1001msec)
00:34:13.889 slat (nsec): min=6662, max=46440, avg=24916.94, stdev=7702.75
00:34:13.889 clat (usec): min=306, max=1083, avg=742.64, stdev=122.50
00:34:13.889 lat (usec): min=334, max=1110, avg=767.56, stdev=125.06
00:34:13.889 clat percentiles (usec):
00:34:13.889 | 1.00th=[ 388], 5.00th=[ 529], 10.00th=[ 570], 20.00th=[ 644],
00:34:13.889 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 783],
00:34:13.889 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 922],
00:34:13.889 | 99.00th=[ 1004], 99.50th=[ 1045], 99.90th=[ 1090], 99.95th=[ 1090],
00:34:13.889 | 99.99th=[ 1090]
00:34:13.889 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:34:13.889 slat (nsec): min=9256, max=71653, avg=29378.07, stdev=12275.79
00:34:13.889 clat (usec): min=131, max=1179, avg=391.97, stdev=129.11
00:34:13.889 lat (usec): min=141, max=1220, avg=421.34, stdev=133.70
00:34:13.889 clat percentiles (usec):
00:34:13.889 | 1.00th=[ 147], 5.00th=[ 204], 10.00th=[ 241], 20.00th=[ 281],
00:34:13.889 | 30.00th=[ 310], 40.00th=[ 338], 50.00th=[ 375], 60.00th=[ 416],
00:34:13.889 | 70.00th=[ 461], 80.00th=[ 510], 90.00th=[ 570], 95.00th=[ 611],
00:34:13.889 | 99.00th=[ 685], 99.50th=[ 717], 99.90th=[ 783], 99.95th=[ 1188],
00:34:13.889 | 99.99th=[ 1188]
00:34:13.889 bw ( KiB/s): min= 4096, max= 4096, per=33.93%, avg=4096.00, stdev= 0.00, samples=1
00:34:13.889 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:13.889 lat (usec) : 250=7.15%, 500=40.45%, 750=30.89%, 1000=21.00%
00:34:13.889 lat (msec) : 2=0.51%
00:34:13.889 cpu : usr=3.00%, sys=6.60%, ctx=1753, majf=0, minf=1
00:34:13.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:13.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:13.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:13.889 issued rwts: total=724,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:13.889 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:13.889 job1: (groupid=0, jobs=1): err= 0: pid=966592: Wed Nov 20 09:19:39 2024
00:34:13.889 read: IOPS=719, BW=2877KiB/s (2946kB/s)(2880KiB/1001msec)
00:34:13.889 slat (nsec): min=6908, max=48674, avg=22763.13, stdev=7447.68
00:34:13.889 clat (usec): min=414, max=1066, avg=732.17, stdev=78.87
00:34:13.889 lat (usec): min=422, max=1091, avg=754.93, stdev=80.32
00:34:13.889 clat percentiles (usec):
00:34:13.889 | 1.00th=[ 545], 5.00th=[ 603], 10.00th=[ 627], 20.00th=[ 668],
00:34:13.889 | 30.00th=[ 701], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 750],
00:34:13.889 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 816], 95.00th=[ 840],
00:34:13.889 | 99.00th=[ 947], 99.50th=[ 996], 99.90th=[ 1074], 99.95th=[ 1074],
00:34:13.889 | 99.99th=[ 1074]
00:34:13.889 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:34:13.889 slat (nsec): min=9395, max=64125, avg=26916.07, stdev=10378.73
00:34:13.889 clat (usec): min=118, max=730, avg=407.48, stdev=97.55
00:34:13.889 lat (usec): min=129, max=763, avg=434.39, stdev=101.14
00:34:13.889 clat percentiles (usec):
00:34:13.889 | 1.00th=[ 149], 5.00th=[ 255], 10.00th=[ 281], 20.00th=[ 322],
00:34:13.889 | 30.00th=[ 355], 40.00th=[ 388], 50.00th=[ 420], 60.00th=[ 445],
00:34:13.889 | 70.00th=[ 461], 80.00th=[ 486], 90.00th=[ 523], 95.00th=[ 562],
00:34:13.889 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 709], 99.95th=[ 734],
00:34:13.889 | 99.99th=[ 734]
00:34:13.889 bw ( KiB/s): min= 4096, max= 4096, per=33.93%, avg=4096.00, stdev= 0.00, samples=1
00:34:13.889 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:13.889 lat (usec) : 250=2.58%, 500=47.65%, 750=32.34%, 1000=17.26%
00:34:13.890 lat (msec) : 2=0.17%
00:34:13.890 cpu : usr=2.50%, sys=4.40%, ctx=1744, majf=0, minf=1
00:34:13.890 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:13.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:13.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:13.890 issued rwts: total=720,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:13.890 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:13.890 job2: (groupid=0, jobs=1): err= 0: pid=966593: Wed Nov 20 09:19:39 2024
00:34:13.890 read: IOPS=31, BW=127KiB/s (130kB/s)(128KiB/1009msec)
00:34:13.890 slat (nsec): min=7227, max=27538, avg=23713.91, stdev=6707.10
00:34:13.890 clat (usec): min=694, max=41935, avg=23553.21, stdev=20313.19
00:34:13.890 lat (usec): min=719, max=41961, avg=23576.93, stdev=20316.76
00:34:13.890 clat percentiles (usec):
00:34:13.890 | 1.00th=[ 693], 5.00th=[ 725], 10.00th=[ 791], 20.00th=[ 873],
00:34:13.890 | 30.00th=[ 930], 40.00th=[ 1057], 50.00th=[41157], 60.00th=[41157],
00:34:13.890 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681],
00:34:13.890 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:34:13.890 | 99.99th=[41681]
00:34:13.890 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets
00:34:13.890 slat (nsec): min=10063, max=54051, avg=29951.32, stdev=10521.47
00:34:13.890 clat (usec): min=216, max=1051, avg=458.21, stdev=120.27
00:34:13.890 lat (usec): min=252, max=1087, avg=488.16, stdev=123.58
00:34:13.890 clat percentiles (usec):
00:34:13.890 | 1.00th=[ 245], 5.00th=[ 293], 10.00th=[ 318], 20.00th=[ 355],
00:34:13.890 | 30.00th=[ 383], 40.00th=[ 412], 50.00th=[ 449], 60.00th=[ 482],
00:34:13.890 | 70.00th=[ 519], 80.00th=[ 545], 90.00th=[ 603], 95.00th=[ 660],
00:34:13.890 | 99.00th=[ 816], 99.50th=[ 938], 99.90th=[ 1057], 99.95th=[ 1057],
00:34:13.890 | 99.99th=[ 1057]
00:34:13.890 bw ( KiB/s): min= 4096, max= 4096, per=33.93%, avg=4096.00, stdev= 0.00, samples=1
00:34:13.890 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:13.890 lat (usec) : 250=1.84%, 500=59.56%, 750=31.62%, 1000=2.76%
00:34:13.890 lat (msec) : 2=0.92%, 50=3.31%
00:34:13.890 cpu : usr=0.99%, sys=1.29%, ctx=545, majf=0, minf=1
00:34:13.890 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:13.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:13.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:13.890 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:13.890 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:13.890 job3: (groupid=0, jobs=1): err= 0: pid=966594: Wed Nov 20 09:19:39 2024
00:34:13.890 read: IOPS=18, BW=74.7KiB/s (76.4kB/s)(76.0KiB/1018msec)
00:34:13.890 slat (nsec): min=26232, max=26955, avg=26491.26, stdev=161.20
00:34:13.890 clat (usec): min=40783, max=41110, avg=40965.29, stdev=79.27
00:34:13.890 lat (usec): min=40809, max=41136, avg=40991.78, stdev=79.29
00:34:13.890 clat percentiles (usec):
00:34:13.890 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:34:13.890 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:34:13.890 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:34:13.890 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:34:13.890 | 99.99th=[41157]
00:34:13.890 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets
00:34:13.890 slat (nsec): min=9583, max=51977, avg=28065.26, stdev=10812.98
00:34:13.890 clat (usec): min=219, max=3478, avg=432.59, stdev=165.89
00:34:13.890 lat (usec): min=233, max=3514, avg=460.65, stdev=168.26
00:34:13.890 clat percentiles (usec):
00:34:13.890 | 1.00th=[ 239], 5.00th=[ 277], 10.00th=[ 306], 20.00th=[ 338],
00:34:13.890 | 30.00th=[ 363], 40.00th=[ 388], 50.00th=[ 420], 60.00th=[ 453],
00:34:13.890 | 70.00th=[ 486], 80.00th=[ 519], 90.00th=[ 553], 95.00th=[ 578],
00:34:13.890 | 99.00th=[ 635], 99.50th=[ 758], 99.90th=[ 3490], 99.95th=[ 3490],
00:34:13.890 | 99.99th=[ 3490]
00:34:13.890 bw ( KiB/s): min= 4096, max= 4096, per=33.93%, avg=4096.00, stdev= 0.00, samples=1
00:34:13.890 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:13.890 lat (usec) : 250=1.69%, 500=69.87%, 750=24.29%, 1000=0.38%
00:34:13.890 lat (msec) : 4=0.19%, 50=3.58%
00:34:13.890 cpu : usr=0.69%, sys=1.38%, ctx=531, majf=0, minf=2
00:34:13.890 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:13.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:13.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:13.890 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:13.890 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:13.890
00:34:13.890 Run status group 0 (all jobs):
00:34:13.890 READ: bw=5874KiB/s (6015kB/s), 74.7KiB/s-2893KiB/s (76.4kB/s-2963kB/s), io=5980KiB (6124kB), run=1001-1018msec
00:34:13.890 WRITE: bw=11.8MiB/s (12.4MB/s), 2012KiB/s-4092KiB/s (2060kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1018msec
00:34:13.890
00:34:13.890 Disk stats (read/write):
00:34:13.890 nvme0n1: ios=564/1022, merge=0/0, ticks=1190/305, in_queue=1495, util=96.49%
00:34:13.890 nvme0n2: ios=532/1002, merge=0/0, ticks=470/392, in_queue=862, util=90.09%
00:34:13.890 nvme0n3: ios=84/512, merge=0/0, ticks=871/225, in_queue=1096, util=96.51%
00:34:13.890 nvme0n4: ios=41/512, merge=0/0, ticks=853/199, in_queue=1052, util=91.55%
00:34:13.890 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:34:13.890 [global]
00:34:13.890 thread=1
00:34:13.890 invalidate=1
00:34:13.890 rw=randwrite
00:34:13.890 time_based=1
00:34:13.890 runtime=1
00:34:13.890 ioengine=libaio
00:34:13.890 direct=1
00:34:13.890 bs=4096
00:34:13.890 iodepth=1
00:34:13.890 norandommap=0
00:34:13.890 numjobs=1
00:34:13.890
00:34:13.890 verify_dump=1
00:34:13.890 verify_backlog=512
00:34:13.890 verify_state_save=0
00:34:13.890 do_verify=1
00:34:13.890 verify=crc32c-intel
00:34:13.890 [job0]
00:34:13.890 filename=/dev/nvme0n1
00:34:13.890 [job1]
00:34:13.890 filename=/dev/nvme0n2
00:34:13.890 [job2]
00:34:13.890 filename=/dev/nvme0n3
00:34:13.890 [job3]
00:34:13.890 filename=/dev/nvme0n4
00:34:13.890 Could not set queue depth (nvme0n1)
00:34:13.890 Could not set queue depth (nvme0n2)
00:34:13.890 Could not set queue depth (nvme0n3)
00:34:13.890 Could not set queue depth (nvme0n4)
00:34:14.149 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:14.149 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:14.149 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:14.149 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:14.149 fio-3.35
00:34:14.149 Starting 4 threads
00:34:15.531
00:34:15.531 job0: (groupid=0, jobs=1): err= 0: pid=967108: Wed Nov 20 09:19:40 2024
00:34:15.531 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:34:15.531 slat (nsec): min=6575, max=57397, avg=27296.61, stdev=3360.01
00:34:15.531 clat (usec): min=568, max=1411, avg=1011.80, stdev=98.47
00:34:15.531 lat (usec): min=596, max=1442, avg=1039.10, stdev=98.94
00:34:15.531 clat percentiles (usec):
00:34:15.531 | 1.00th=[ 750], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 938],
00:34:15.531 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1045],
00:34:15.532 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156],
00:34:15.532 | 99.00th=[ 1221], 99.50th=[ 1221], 99.90th=[ 1418], 99.95th=[ 1418],
00:34:15.532 | 99.99th=[ 1418]
00:34:15.532 write: IOPS=685, BW=2741KiB/s (2807kB/s)(2744KiB/1001msec); 0 zone resets
00:34:15.532 slat (nsec): min=9289, max=68983, avg=31351.33, stdev=9041.13
00:34:15.532 clat (usec): min=241, max=1772, avg=637.32, stdev=142.98
00:34:15.532 lat (usec): min=251, max=1782, avg=668.67, stdev=145.60
00:34:15.532 clat percentiles (usec):
00:34:15.532 | 1.00th=[ 322], 5.00th=[ 408], 10.00th=[ 461], 20.00th=[ 523],
00:34:15.532 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 676],
00:34:15.532 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 832],
00:34:15.532 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1778], 99.95th=[ 1778],
00:34:15.532 | 99.99th=[ 1778]
00:34:15.532 bw ( KiB/s): min= 4096, max= 4096, per=43.62%, avg=4096.00, stdev= 0.00, samples=1
00:34:15.532 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:15.532 lat (usec) : 250=0.17%, 500=9.68%, 750=36.31%, 1000=28.55%
00:34:15.532 lat (msec) : 2=25.29%
00:34:15.532 cpu : usr=3.60%, sys=3.60%, ctx=1201, majf=0, minf=1
00:34:15.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:15.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:15.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:15.532 issued rwts: total=512,686,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:15.532 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:15.532 job1: (groupid=0, jobs=1): err= 0: pid=967109: Wed Nov 20 09:19:40 2024
00:34:15.532 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:34:15.532 slat (nsec): min=7470, max=77398, avg=28506.14, stdev=4834.33
00:34:15.532 clat (usec): min=415, max=1991, avg=1077.98, stdev=158.55
00:34:15.532 lat (usec): min=444, max=2025, avg=1106.49, stdev=158.86
00:34:15.532 clat percentiles (usec):
00:34:15.532 | 1.00th=[ 635], 5.00th=[ 783], 10.00th=[ 873], 20.00th=[ 979],
00:34:15.532 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1106], 60.00th=[ 1123],
00:34:15.532 | 70.00th=[ 1156], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1287],
00:34:15.532 | 99.00th=[ 1369], 99.50th=[ 1401], 99.90th=[ 1991], 99.95th=[ 1991],
00:34:15.532 | 99.99th=[ 1991]
00:34:15.532 write: IOPS=674, BW=2697KiB/s (2762kB/s)(2700KiB/1001msec); 0 zone resets
00:34:15.532 slat (nsec): min=9107, max=56979, avg=32077.77, stdev=8773.67
00:34:15.532 clat (usec): min=127, max=3476, avg=595.86, stdev=186.03
00:34:15.532 lat (usec): min=137, max=3530, avg=627.93, stdev=189.41
00:34:15.532 clat percentiles (usec):
00:34:15.532 | 1.00th=[ 217], 5.00th=[ 330], 10.00th=[ 383], 20.00th=[ 478],
00:34:15.532 | 30.00th=[ 523], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635],
00:34:15.532 | 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 824],
00:34:15.532 | 99.00th=[ 881], 99.50th=[ 906], 99.90th=[ 3490], 99.95th=[ 3490],
00:34:15.532 | 99.99th=[ 3490]
00:34:15.532 bw ( KiB/s): min= 4096, max= 4096, per=43.62%, avg=4096.00, stdev= 0.00, samples=1
00:34:15.532 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:15.532 lat (usec) : 250=1.18%, 500=12.89%, 750=36.73%, 1000=16.68%
00:34:15.532 lat (msec) : 2=32.43%, 4=0.08%
00:34:15.532 cpu : usr=2.70%, sys=4.70%, ctx=1188, majf=0, minf=1
00:34:15.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:15.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:15.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:15.532 issued rwts: total=512,675,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:15.532 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:15.532 job2: (groupid=0, jobs=1): err= 0: pid=967110: Wed Nov 20 09:19:40 2024
00:34:15.532 read: IOPS=150, BW=603KiB/s (618kB/s)(604KiB/1001msec)
00:34:15.532 slat (nsec): min=7369, max=43251, avg=28617.53, stdev=2678.14
00:34:15.532 clat (usec): min=489, max=42016, avg=4028.75, stdev=10609.56
00:34:15.532 lat (usec): min=518, max=42044, avg=4057.36, stdev=10609.42
00:34:15.532 clat percentiles (usec):
00:34:15.532 | 1.00th=[ 619], 5.00th=[ 832], 10.00th=[ 865], 20.00th=[ 988],
00:34:15.532 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123],
00:34:15.532 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1254], 95.00th=[41681],
00:34:15.532 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:15.532 | 99.99th=[42206]
00:34:15.532 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:34:15.532 slat (nsec): min=9632, max=56889, avg=32862.85, stdev=9296.85
00:34:15.532 clat (usec): min=213, max=1871, avg=715.17, stdev=145.25
00:34:15.532 lat (usec): min=224, max=1895, avg=748.03, stdev=147.68
00:34:15.532 clat percentiles (usec):
00:34:15.532 | 1.00th=[ 371], 5.00th=[ 469], 10.00th=[ 537], 20.00th=[ 619],
00:34:15.532 | 30.00th=[ 660], 40.00th=[ 693], 50.00th=[ 725], 60.00th=[ 750],
00:34:15.532 | 70.00th=[ 783], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 898],
00:34:15.532 | 99.00th=[ 979], 99.50th=[ 1221], 99.90th=[ 1876], 99.95th=[ 1876],
00:34:15.532 | 99.99th=[ 1876]
00:34:15.532 bw ( KiB/s): min= 4096, max= 4096, per=43.62%, avg=4096.00, stdev= 0.00, samples=1
00:34:15.532 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:15.532 lat (usec) : 250=0.15%, 500=6.18%, 750=40.42%, 1000=34.84%
00:34:15.532 lat (msec) : 2=16.59%, 4=0.15%, 50=1.66%
00:34:15.532 cpu : usr=0.90%, sys=3.20%, ctx=664, majf=0, minf=1
00:34:15.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:15.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:15.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:15.532 issued rwts: total=151,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:15.532 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:15.532 job3: (groupid=0, jobs=1): err= 0: pid=967113: Wed Nov 20 09:19:40 2024
00:34:15.532 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1016msec)
00:34:15.532 slat (nsec): min=26188, max=27071, avg=26493.00, stdev=242.45
00:34:15.532 clat (usec): min=1522, max=42144, avg=39444.27, stdev=9778.69
00:34:15.532 lat (usec): min=1548, max=42171, avg=39470.77, stdev=9778.71
00:34:15.533 clat percentiles (usec):
00:34:15.533 | 1.00th=[ 1516], 5.00th=[ 1516], 10.00th=[41157], 20.00th=[41681],
00:34:15.533 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206],
00:34:15.533 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:34:15.533 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:15.533 | 99.99th=[42206]
00:34:15.533 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets
00:34:15.533 slat (nsec): min=3638, max=51722, avg=27202.06, stdev=11839.67
00:34:15.533 clat (usec): min=244, max=1058, avg=638.61, stdev=147.96
00:34:15.533 lat (usec): min=254, max=1108, avg=665.82, stdev=153.15
00:34:15.533 clat percentiles (usec):
00:34:15.533 | 1.00th=[ 326], 5.00th=[ 392], 10.00th=[ 437], 20.00th=[ 502],
00:34:15.533 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 644], 60.00th=[ 685],
00:34:15.533 | 70.00th=[ 734], 80.00th=[ 775], 90.00th=[ 816], 95.00th=[ 873],
00:34:15.533 | 99.00th=[ 938], 99.50th=[ 979], 99.90th=[ 1057], 99.95th=[ 1057],
00:34:15.533 | 99.99th=[ 1057]
00:34:15.533 bw ( KiB/s): min= 4096, max= 4096, per=43.62%, avg=4096.00, stdev= 0.00, samples=1
00:34:15.533 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:15.533 lat (usec) : 250=0.57%, 500=18.53%, 750=52.17%, 1000=25.14%
00:34:15.533 lat (msec) : 2=0.57%, 50=3.02%
00:34:15.533 cpu : usr=0.49%, sys=1.58%, ctx=530, majf=0, minf=1
00:34:15.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:15.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:15.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:15.533 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:15.533 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:15.533
00:34:15.533 Run status group 0 (all jobs):
00:34:15.533 READ: bw=4693KiB/s (4806kB/s), 66.9KiB/s-2046KiB/s (68.5kB/s-2095kB/s), io=4768KiB (4882kB), run=1001-1016msec
00:34:15.533 WRITE: bw=9390KiB/s (9615kB/s), 2016KiB/s-2741KiB/s (2064kB/s-2807kB/s), io=9540KiB (9769kB), run=1001-1016msec
00:34:15.533
00:34:15.533 Disk stats (read/write):
00:34:15.533 nvme0n1: ios=513/512, merge=0/0, ticks=1020/252, in_queue=1272, util=96.99%
00:34:15.533 nvme0n2: ios=517/512, merge=0/0, ticks=773/232, in_queue=1005, util=97.25%
00:34:15.533 nvme0n3: ios=69/512, merge=0/0, ticks=1636/300, in_queue=1936, util=97.15%
00:34:15.533 nvme0n4: ios=60/512, merge=0/0, ticks=605/309, in_queue=914, util=100.00%
00:34:15.533 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:34:15.533 [global]
00:34:15.533 thread=1
00:34:15.533 invalidate=1
00:34:15.533 rw=write
00:34:15.533 time_based=1
00:34:15.533 runtime=1
00:34:15.533 ioengine=libaio
00:34:15.533 direct=1
00:34:15.533 bs=4096
00:34:15.533 iodepth=128
00:34:15.533 norandommap=0
00:34:15.533 numjobs=1
00:34:15.533
00:34:15.533 verify_dump=1
00:34:15.533 verify_backlog=512
00:34:15.533 verify_state_save=0
00:34:15.533 do_verify=1
00:34:15.533 verify=crc32c-intel
00:34:15.533 [job0]
00:34:15.533 filename=/dev/nvme0n1
00:34:15.533 [job1]
00:34:15.533 filename=/dev/nvme0n2
00:34:15.533 [job2]
00:34:15.533 filename=/dev/nvme0n3
00:34:15.533 [job3]
00:34:15.533 filename=/dev/nvme0n4
00:34:15.533 Could not set queue depth (nvme0n1)
00:34:15.533 Could not set queue depth (nvme0n2)
00:34:15.533 Could not set queue depth (nvme0n3)
00:34:15.533 Could not set queue depth (nvme0n4)
00:34:15.793 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:15.793 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:15.793 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:15.793 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:15.793 fio-3.35
00:34:15.793 Starting 4 threads
00:34:17.178
00:34:17.178 job0: (groupid=0, jobs=1): err= 0: pid=967634: Wed Nov 20 09:19:42 2024
00:34:17.178 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.5MiB/1047msec)
00:34:17.178 slat (nsec): min=917, max=10769k, avg=98808.68, stdev=685686.19
00:34:17.178 clat (usec): min=5849, max=72855, avg=13352.68, stdev=9414.16
00:34:17.178 lat (usec): min=5860, max=83624, avg=13451.49, stdev=9472.53
00:34:17.178 clat percentiles (usec):
00:34:17.178 | 1.00th=[ 6259], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[ 8455],
00:34:17.178 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11469], 60.00th=[12649],
00:34:17.179 | 70.00th=[13566], 80.00th=[14746], 90.00th=[17433], 95.00th=[21365],
00:34:17.179 | 99.00th=[61604], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877],
00:34:17.179 | 99.99th=[72877]
00:34:17.179 write: IOPS=4401, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1047msec); 0 zone resets
00:34:17.179 slat (nsec): min=1550, max=8640.6k, avg=121023.34, stdev=683615.00
00:34:17.179 clat (usec): min=588, max=71455, avg=16497.88, stdev=14647.90
00:34:17.179 lat (usec): min=717, max=71462, avg=16618.90, stdev=14741.99
00:34:17.179 clat percentiles (usec):
00:34:17.179 | 1.00th=[ 5473], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 7767],
00:34:17.179 | 30.00th=[ 8586], 40.00th=[10552], 50.00th=[10814], 60.00th=[11338],
00:34:17.179 | 70.00th=[14615], 80.00th=[16319], 90.00th=[42206], 95.00th=[54264],
00:34:17.179 | 99.00th=[64750], 99.50th=[66323], 99.90th=[69731], 99.95th=[71828],
00:34:17.179 | 99.99th=[71828]
00:34:17.179 bw ( KiB/s): min=12288, max=24576, per=20.94%, avg=18432.00, stdev=8688.93, samples=2
00:34:17.179 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2
00:34:17.179 lat (usec) : 750=0.02%, 1000=0.07%
00:34:17.179 lat (msec) : 2=0.12%, 4=0.01%, 10=35.99%, 20=51.52%, 50=6.71%
00:34:17.179 lat (msec) : 100=5.56%
00:34:17.179 cpu : usr=2.39%, sys=5.64%, ctx=358, majf=0, minf=1
00:34:17.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:34:17.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:17.179 issued rwts: total=4229,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:17.179 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:17.179 job1: (groupid=0, jobs=1): err= 0: pid=967636: Wed Nov 20 09:19:42 2024
00:34:17.179 read: IOPS=5684, BW=22.2MiB/s (23.3MB/s)(22.2MiB/1002msec)
00:34:17.179 slat (nsec): min=887, max=11714k, avg=84443.77, stdev=570474.80
00:34:17.179 clat (usec): min=1271, max=27301, avg=10688.08, stdev=3861.68
00:34:17.179 lat (usec): min=2368, max=27305, avg=10772.52, stdev=3908.67
00:34:17.179 clat percentiles (usec):
00:34:17.179 | 1.00th=[ 5014], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7701],
00:34:17.179 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[10683],
00:34:17.179 | 70.00th=[11731], 80.00th=[13829], 90.00th=[16712], 95.00th=[18744],
00:34:17.179 | 99.00th=[21890], 99.50th=[22152], 99.90th=[27395], 99.95th=[27395],
00:34:17.179 | 99.99th=[27395]
00:34:17.179 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets
00:34:17.179 slat (nsec): min=1512, max=6621.9k, avg=79467.29, stdev=454481.82
00:34:17.179 clat (usec): min=3265, max=43056, avg=10727.23, stdev=6407.81
00:34:17.179 lat (usec): min=3267, max=43060, avg=10806.70, stdev=6460.42
00:34:17.179 clat percentiles (usec):
00:34:17.179 | 1.00th=[ 4883], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 6980],
00:34:17.179 | 30.00th=[ 7242], 40.00th=[ 7963], 50.00th=[ 8356], 60.00th=[ 9241],
00:34:17.179 | 70.00th=[10945], 80.00th=[12387], 90.00th=[20317], 95.00th=[23987],
00:34:17.179 | 99.00th=[38536], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254],
00:34:17.179 | 99.99th=[43254]
00:34:17.179 bw ( KiB/s): min=19144, max=29504, per=27.63%, avg=24324.00, stdev=7325.63, samples=2
00:34:17.179 iops : min= 4786, max= 7376, avg=6081.00, stdev=1831.41, samples=2
00:34:17.179 lat (msec) : 2=0.01%, 4=0.25%, 10=60.20%, 20=32.88%, 50=6.66%
00:34:17.179 cpu : usr=4.40%, sys=5.79%, ctx=548, majf=0, minf=1
00:34:17.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:34:17.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:17.179 issued rwts: total=5696,6144,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:17.179 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:17.179 job2: (groupid=0, jobs=1): err= 0: pid=967637: Wed Nov 20 09:19:42 2024
00:34:17.179 read: IOPS=7142, BW=27.9MiB/s (29.3MB/s)(29.2MiB/1047msec)
00:34:17.179 slat (nsec): min=963, max=13373k, avg=67242.63, stdev=566725.25
00:34:17.179 clat (usec): min=1882, max=55539, avg=10049.43, stdev=6255.25
00:34:17.179 lat (usec): min=1891, max=61465, avg=10116.68, stdev=6280.72
00:34:17.179 clat percentiles (usec):
00:34:17.179 | 1.00th=[ 3949], 5.00th=[ 5866], 10.00th=[ 6652], 20.00th=[ 7242],
00:34:17.179 | 30.00th=[ 7570], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9110],
00:34:17.179 | 70.00th=[10028], 80.00th=[11863], 90.00th=[14091], 95.00th=[15664],
00:34:17.179 | 99.00th=[51119], 99.50th=[51643], 99.90th=[55313], 99.95th=[55313],
00:34:17.179 | 99.99th=[55313]
00:34:17.179 write: IOPS=7335, BW=28.7MiB/s (30.0MB/s)(30.0MiB/1047msec); 0 zone resets
00:34:17.179 slat (nsec): min=1630, max=7729.8k, avg=53173.51, stdev=422036.83
00:34:17.179 clat (usec): min=646, max=25956, avg=7518.93, stdev=2847.94
00:34:17.179 lat (usec): min=816, max=25958, avg=7572.10, stdev=2867.94
00:34:17.179 clat percentiles (usec):
00:34:17.179 | 1.00th=[ 1631], 5.00th=[ 4047], 10.00th=[ 4621], 20.00th=[ 5473],
00:34:17.179 | 30.00th=[ 6325], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7439],
00:34:17.179 | 70.00th=[ 8160], 80.00th=[ 9241], 90.00th=[11076], 95.00th=[12911],
00:34:17.179 | 99.00th=[17957], 99.50th=[20317], 99.90th=[22152], 99.95th=[22414],
00:34:17.179 | 99.99th=[26084]
00:34:17.179 bw ( KiB/s): min=28720, max=32720, per=34.90%, avg=30720.00, stdev=2828.43, samples=2
00:34:17.179 iops : min= 7180, max= 8180, avg=7680.00, stdev=707.11, samples=2
00:34:17.179 lat (usec) : 750=0.01%, 1000=0.01%
00:34:17.179 lat (msec) : 2=0.73%, 4=2.16%, 10=75.18%, 20=20.62%, 50=0.46%
00:34:17.179 lat (msec) : 100=0.83%
00:34:17.179 cpu : usr=6.02%, sys=7.36%, ctx=388, majf=0, minf=2
00:34:17.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:34:17.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:17.179 issued rwts: total=7478,7680,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:17.179 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:17.179 job3: (groupid=0, jobs=1): err= 0: pid=967638: Wed Nov 20 09:19:42 2024
00:34:17.179 read: IOPS=4374, BW=17.1MiB/s (17.9MB/s)(17.9MiB/1047msec)
00:34:17.179 slat (nsec): min=934, max=15505k, avg=116893.07, stdev=807675.73
00:34:17.179 clat (usec): min=568, max=61464, avg=16095.39, stdev=9782.83
00:34:17.179 lat (usec): min=575, max=71033, avg=16212.28, stdev=9846.59
00:34:17.179 clat percentiles (usec):
00:34:17.179 | 1.00th=[ 3556], 5.00th=[ 6587], 10.00th=[ 7898], 20.00th=[ 8979],
00:34:17.179 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[13304], 60.00th=[16188],
00:34:17.179 | 70.00th=[19006], 80.00th=[21365], 90.00th=[26346], 95.00th=[31589],
00:34:17.179 | 99.00th=[61080], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604],
00:34:17.179 | 99.99th=[61604]
00:34:17.179 write: IOPS=4401, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1047msec); 0 zone resets
00:34:17.179 slat (nsec): min=1557, max=15906k, avg=93270.55, stdev=594041.09
00:34:17.179 clat (usec): min=303, max=34998, avg=12707.52, stdev=5706.85
00:34:17.179 lat (usec): min=336, max=35032, avg=12800.79, stdev=5756.86
00:34:17.179 clat percentiles (usec):
00:34:17.179 | 1.00th=[ 3359], 5.00th=[ 5866], 10.00th=[ 6652], 20.00th=[ 8455],
00:34:17.179 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[11207], 60.00th=[12256],
00:34:17.179 | 70.00th=[14615], 80.00th=[17957], 90.00th=[21890], 95.00th=[25035],
00:34:17.179 | 99.00th=[26346], 99.50th=[26870], 99.90th=[29230], 99.95th=[30278],
00:34:17.179 | 99.99th=[34866]
00:34:17.179 bw ( KiB/s): min=17648, max=19216, per=20.94%, avg=18432.00, stdev=1108.74, samples=2
00:34:17.179 iops : min= 4412, max= 4804, avg=4608.00, stdev=277.19, samples=2
00:34:17.179 lat (usec) : 500=0.02%, 750=0.22%, 1000=0.16%
00:34:17.179 lat (msec) : 2=0.13%, 4=0.66%, 10=37.37%, 20=40.92%, 50=19.13%
00:34:17.179 lat (msec) : 100=1.37%
00:34:17.179 cpu : usr=4.30%, sys=3.25%, ctx=391, majf=0, minf=2
00:34:17.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:34:17.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:17.179 issued rwts: total=4580,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:17.179 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:17.179
00:34:17.179 Run status group 0 (all jobs):
00:34:17.179 READ: bw=82.0MiB/s (86.0MB/s), 15.8MiB/s-27.9MiB/s (16.5MB/s-29.3MB/s), io=85.9MiB (90.0MB), run=1002-1047msec
00:34:17.179 WRITE: bw=86.0MiB/s (90.1MB/s), 17.2MiB/s-28.7MiB/s (18.0MB/s-30.0MB/s), io=90.0MiB (94.4MB), run=1002-1047msec
00:34:17.179
00:34:17.179 Disk stats (read/write):
00:34:17.179 nvme0n1: ios=3858/4096, merge=0/0, ticks=24872/40534, in_queue=65406, util=90.58%
00:34:17.179 nvme0n2: ios=4203/4608, merge=0/0, ticks=24284/25213, in_queue=49497, util=87.21%
00:34:17.179 nvme0n3: ios=6355/6656, merge=0/0, ticks=54075/45701, in_queue=99776, util=91.20%
00:34:17.179 nvme0n4: ios=3419/3584, merge=0/0, ticks=20910/18267, in_queue=39177, util=95.92%
00:34:17.179 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:34:17.179 [global]
00:34:17.179 thread=1
00:34:17.179 invalidate=1
00:34:17.179 rw=randwrite
00:34:17.179 time_based=1
00:34:17.179 runtime=1
00:34:17.179 ioengine=libaio
00:34:17.180 direct=1
00:34:17.180 bs=4096
00:34:17.180 iodepth=128
00:34:17.180 norandommap=0
00:34:17.180 numjobs=1
00:34:17.180
00:34:17.180 verify_dump=1
00:34:17.180 verify_backlog=512
00:34:17.180 verify_state_save=0
00:34:17.180 do_verify=1
00:34:17.180 verify=crc32c-intel
00:34:17.180 [job0]
00:34:17.180 filename=/dev/nvme0n1
00:34:17.180 [job1]
00:34:17.180 filename=/dev/nvme0n2
00:34:17.180 [job2]
00:34:17.180 filename=/dev/nvme0n3
00:34:17.180 [job3]
00:34:17.180 filename=/dev/nvme0n4
00:34:17.180 Could not set queue depth (nvme0n1)
00:34:17.180 Could not set queue depth (nvme0n2)
00:34:17.180 Could not set queue depth (nvme0n3)
00:34:17.180 Could not set queue depth (nvme0n4)
00:34:17.440 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:17.440 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:17.440 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:17.440 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:17.440 fio-3.35
00:34:17.440 Starting 4 threads
00:34:18.826
00:34:18.826 job0: (groupid=0, jobs=1): err= 0: pid=968091: Wed Nov 20 09:19:44 2024
00:34:18.826 read: IOPS=5844, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1002msec)
00:34:18.826 slat (nsec): min=976, max=10325k, avg=82361.53, stdev=594539.32
00:34:18.826 clat (usec): min=1000, max=46675, avg=10545.55, stdev=4316.29
lat (usec): min=5499, max=46679, avg=10627.91, stdev=4364.14 00:34:18.826 clat percentiles (usec): 00:34:18.826 | 1.00th=[ 6128], 5.00th=[ 6849], 10.00th=[ 7242], 20.00th=[ 7832], 00:34:18.826 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10159], 00:34:18.826 | 70.00th=[11469], 80.00th=[12518], 90.00th=[14353], 95.00th=[16581], 00:34:18.826 | 99.00th=[34866], 99.50th=[40109], 99.90th=[44303], 99.95th=[46924], 00:34:18.826 | 99.99th=[46924] 00:34:18.826 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:34:18.826 slat (nsec): min=1636, max=7482.1k, avg=79267.85, stdev=501662.04 00:34:18.826 clat (usec): min=1238, max=46669, avg=10627.17, stdev=5822.95 00:34:18.826 lat (usec): min=1249, max=46672, avg=10706.44, stdev=5864.92 00:34:18.826 clat percentiles (usec): 00:34:18.826 | 1.00th=[ 4359], 5.00th=[ 5211], 10.00th=[ 6128], 20.00th=[ 6980], 00:34:18.826 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8717], 60.00th=[ 9765], 00:34:18.826 | 70.00th=[11076], 80.00th=[13173], 90.00th=[18744], 95.00th=[22676], 00:34:18.826 | 99.00th=[33424], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:34:18.826 | 99.99th=[46924] 00:34:18.826 bw ( KiB/s): min=23464, max=25688, per=27.99%, avg=24576.00, stdev=1572.61, samples=2 00:34:18.826 iops : min= 5866, max= 6422, avg=6144.00, stdev=393.15, samples=2 00:34:18.826 lat (msec) : 2=0.05%, 4=0.12%, 10=59.82%, 20=34.86%, 50=5.14% 00:34:18.826 cpu : usr=3.40%, sys=6.89%, ctx=376, majf=0, minf=1 00:34:18.826 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:18.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:18.826 issued rwts: total=5856,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:18.826 job1: (groupid=0, jobs=1): err= 0: pid=968104: Wed Nov 20 09:19:44 2024 00:34:18.826 
read: IOPS=4761, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1003msec) 00:34:18.826 slat (nsec): min=929, max=11511k, avg=115110.58, stdev=691870.36 00:34:18.826 clat (usec): min=1046, max=50222, avg=13057.23, stdev=7563.43 00:34:18.826 lat (usec): min=4076, max=50229, avg=13172.34, stdev=7614.98 00:34:18.826 clat percentiles (usec): 00:34:18.826 | 1.00th=[ 4490], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 7504], 00:34:18.826 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[11731], 00:34:18.826 | 70.00th=[14746], 80.00th=[19792], 90.00th=[24773], 95.00th=[27395], 00:34:18.826 | 99.00th=[37487], 99.50th=[43254], 99.90th=[50070], 99.95th=[50070], 00:34:18.826 | 99.99th=[50070] 00:34:18.826 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:34:18.826 slat (nsec): min=1561, max=15070k, avg=84134.94, stdev=530208.11 00:34:18.826 clat (usec): min=3719, max=47937, avg=12605.18, stdev=8755.38 00:34:18.826 lat (usec): min=3727, max=47945, avg=12689.31, stdev=8793.19 00:34:18.826 clat percentiles (usec): 00:34:18.826 | 1.00th=[ 4555], 5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 6652], 00:34:18.826 | 30.00th=[ 7046], 40.00th=[ 7308], 50.00th=[ 8160], 60.00th=[10421], 00:34:18.826 | 70.00th=[14091], 80.00th=[18482], 90.00th=[27395], 95.00th=[32900], 00:34:18.826 | 99.00th=[38011], 99.50th=[42730], 99.90th=[43779], 99.95th=[47973], 00:34:18.826 | 99.99th=[47973] 00:34:18.826 bw ( KiB/s): min=16384, max=24576, per=23.33%, avg=20480.00, stdev=5792.62, samples=2 00:34:18.826 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:34:18.826 lat (msec) : 2=0.01%, 4=0.06%, 10=55.90%, 20=25.68%, 50=18.24% 00:34:18.826 lat (msec) : 100=0.11% 00:34:18.826 cpu : usr=2.59%, sys=3.39%, ctx=702, majf=0, minf=1 00:34:18.826 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:18.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:34:18.826 issued rwts: total=4776,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:18.826 job2: (groupid=0, jobs=1): err= 0: pid=968122: Wed Nov 20 09:19:44 2024 00:34:18.826 read: IOPS=4662, BW=18.2MiB/s (19.1MB/s)(18.2MiB/1002msec) 00:34:18.826 slat (nsec): min=969, max=19513k, avg=106598.46, stdev=783387.84 00:34:18.826 clat (usec): min=1567, max=51702, avg=13708.76, stdev=9099.99 00:34:18.826 lat (usec): min=3730, max=51709, avg=13815.36, stdev=9163.09 00:34:18.826 clat percentiles (usec): 00:34:18.826 | 1.00th=[ 4228], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 7046], 00:34:18.826 | 30.00th=[ 8225], 40.00th=[ 9241], 50.00th=[10683], 60.00th=[12125], 00:34:18.826 | 70.00th=[14615], 80.00th=[17433], 90.00th=[28705], 95.00th=[34341], 00:34:18.826 | 99.00th=[48497], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:34:18.826 | 99.99th=[51643] 00:34:18.826 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:34:18.826 slat (nsec): min=1623, max=12903k, avg=90625.05, stdev=616382.48 00:34:18.826 clat (usec): min=1214, max=46243, avg=12185.50, stdev=6248.63 00:34:18.826 lat (usec): min=1226, max=46267, avg=12276.13, stdev=6301.88 00:34:18.826 clat percentiles (usec): 00:34:18.826 | 1.00th=[ 4228], 5.00th=[ 5342], 10.00th=[ 6063], 20.00th=[ 6849], 00:34:18.826 | 30.00th=[ 8094], 40.00th=[ 9503], 50.00th=[11469], 60.00th=[12387], 00:34:18.826 | 70.00th=[13173], 80.00th=[15926], 90.00th=[20579], 95.00th=[25822], 00:34:18.826 | 99.00th=[33162], 99.50th=[33162], 99.90th=[33424], 99.95th=[39584], 00:34:18.826 | 99.99th=[46400] 00:34:18.826 bw ( KiB/s): min=20224, max=20232, per=23.04%, avg=20228.00, stdev= 5.66, samples=2 00:34:18.826 iops : min= 5056, max= 5058, avg=5057.00, stdev= 1.41, samples=2 00:34:18.826 lat (msec) : 2=0.10%, 4=0.48%, 10=42.66%, 20=43.78%, 50=12.70% 00:34:18.826 lat (msec) : 100=0.28% 00:34:18.826 cpu : usr=3.70%, sys=5.09%, 
ctx=326, majf=0, minf=2 00:34:18.826 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:18.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:18.826 issued rwts: total=4672,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:18.826 job3: (groupid=0, jobs=1): err= 0: pid=968129: Wed Nov 20 09:19:44 2024 00:34:18.826 read: IOPS=5149, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1002msec) 00:34:18.826 slat (nsec): min=1029, max=12766k, avg=85241.29, stdev=673496.46 00:34:18.826 clat (usec): min=1586, max=57157, avg=12170.22, stdev=8559.41 00:34:18.826 lat (usec): min=1851, max=57163, avg=12255.46, stdev=8616.35 00:34:18.826 clat percentiles (usec): 00:34:18.826 | 1.00th=[ 3359], 5.00th=[ 4359], 10.00th=[ 6128], 20.00th=[ 7046], 00:34:18.826 | 30.00th=[ 7701], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[10290], 00:34:18.826 | 70.00th=[12518], 80.00th=[14877], 90.00th=[22414], 95.00th=[30278], 00:34:18.826 | 99.00th=[47449], 99.50th=[50594], 99.90th=[57410], 99.95th=[57410], 00:34:18.826 | 99.99th=[57410] 00:34:18.826 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:34:18.826 slat (nsec): min=1662, max=17016k, avg=85153.25, stdev=690420.80 00:34:18.827 clat (usec): min=612, max=48622, avg=11076.87, stdev=7794.68 00:34:18.827 lat (usec): min=645, max=48633, avg=11162.02, stdev=7867.21 00:34:18.827 clat percentiles (usec): 00:34:18.827 | 1.00th=[ 1450], 5.00th=[ 3654], 10.00th=[ 4359], 20.00th=[ 6194], 00:34:18.827 | 30.00th=[ 7177], 40.00th=[ 8094], 50.00th=[ 8848], 60.00th=[ 9896], 00:34:18.827 | 70.00th=[11338], 80.00th=[13042], 90.00th=[18220], 95.00th=[32375], 00:34:18.827 | 99.00th=[37487], 99.50th=[37487], 99.90th=[46400], 99.95th=[46400], 00:34:18.827 | 99.99th=[48497] 00:34:18.827 bw ( KiB/s): min=16384, max=27976, per=25.26%, avg=22180.00, 
stdev=8196.78, samples=2 00:34:18.827 iops : min= 4096, max= 6994, avg=5545.00, stdev=2049.20, samples=2 00:34:18.827 lat (usec) : 750=0.03%, 1000=0.03% 00:34:18.827 lat (msec) : 2=1.13%, 4=4.19%, 10=53.68%, 20=30.19%, 50=10.38% 00:34:18.827 lat (msec) : 100=0.38% 00:34:18.827 cpu : usr=3.80%, sys=6.49%, ctx=353, majf=0, minf=1 00:34:18.827 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:18.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:18.827 issued rwts: total=5160,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:18.827 00:34:18.827 Run status group 0 (all jobs): 00:34:18.827 READ: bw=79.7MiB/s (83.6MB/s), 18.2MiB/s-22.8MiB/s (19.1MB/s-23.9MB/s), io=79.9MiB (83.8MB), run=1002-1003msec 00:34:18.827 WRITE: bw=85.7MiB/s (89.9MB/s), 19.9MiB/s-24.0MiB/s (20.9MB/s-25.1MB/s), io=86.0MiB (90.2MB), run=1002-1003msec 00:34:18.827 00:34:18.827 Disk stats (read/write): 00:34:18.827 nvme0n1: ios=4816/5120, merge=0/0, ticks=39913/45806, in_queue=85719, util=86.57% 00:34:18.827 nvme0n2: ios=4358/4608, merge=0/0, ticks=16068/12973, in_queue=29041, util=88.69% 00:34:18.827 nvme0n3: ios=3633/4057, merge=0/0, ticks=22981/20835, in_queue=43816, util=95.25% 00:34:18.827 nvme0n4: ios=4495/4608, merge=0/0, ticks=34338/27707, in_queue=62045, util=96.91% 00:34:18.827 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:18.827 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=968203 00:34:18.827 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:18.827 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 
4096 -d 1 -t read -r 10 00:34:18.827 [global] 00:34:18.827 thread=1 00:34:18.827 invalidate=1 00:34:18.827 rw=read 00:34:18.827 time_based=1 00:34:18.827 runtime=10 00:34:18.827 ioengine=libaio 00:34:18.827 direct=1 00:34:18.827 bs=4096 00:34:18.827 iodepth=1 00:34:18.827 norandommap=1 00:34:18.827 numjobs=1 00:34:18.827 00:34:18.827 [job0] 00:34:18.827 filename=/dev/nvme0n1 00:34:18.827 [job1] 00:34:18.827 filename=/dev/nvme0n2 00:34:18.827 [job2] 00:34:18.827 filename=/dev/nvme0n3 00:34:18.827 [job3] 00:34:18.827 filename=/dev/nvme0n4 00:34:18.827 Could not set queue depth (nvme0n1) 00:34:18.827 Could not set queue depth (nvme0n2) 00:34:18.827 Could not set queue depth (nvme0n3) 00:34:18.827 Could not set queue depth (nvme0n4) 00:34:19.406 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:19.406 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:19.406 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:19.406 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:19.406 fio-3.35 00:34:19.406 Starting 4 threads 00:34:21.953 09:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:21.953 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=12533760, buflen=4096 00:34:21.953 fio: pid=968610, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:21.953 09:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:22.214 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=5767168, buflen=4096 00:34:22.214 fio: pid=968599, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:22.214 09:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:22.214 09:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:22.476 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11010048, buflen=4096 00:34:22.476 fio: pid=968553, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:22.476 09:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:22.476 09:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:22.476 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12275712, buflen=4096 00:34:22.476 fio: pid=968572, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:22.476 09:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:22.476 09:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:22.476 00:34:22.476 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=968553: Wed Nov 20 09:19:47 2024 00:34:22.476 read: IOPS=925, BW=3699KiB/s (3787kB/s)(10.5MiB/2907msec) 00:34:22.476 slat (usec): min=2, max=26250, avg=53.23, stdev=773.80 00:34:22.476 clat (usec): min=308, max=1342, avg=1012.19, stdev=111.95 
00:34:22.476 lat (usec): min=315, max=27422, avg=1065.43, stdev=786.94 00:34:22.476 clat percentiles (usec): 00:34:22.476 | 1.00th=[ 644], 5.00th=[ 807], 10.00th=[ 881], 20.00th=[ 938], 00:34:22.476 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1029], 60.00th=[ 1057], 00:34:22.476 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:34:22.476 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1303], 99.95th=[ 1319], 00:34:22.476 | 99.99th=[ 1336] 00:34:22.476 bw ( KiB/s): min= 3768, max= 4000, per=29.20%, avg=3848.00, stdev=97.32, samples=5 00:34:22.476 iops : min= 942, max= 1000, avg=962.00, stdev=24.33, samples=5 00:34:22.476 lat (usec) : 500=0.15%, 750=2.27%, 1000=38.04% 00:34:22.476 lat (msec) : 2=59.50% 00:34:22.476 cpu : usr=0.69%, sys=3.03%, ctx=2693, majf=0, minf=1 00:34:22.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:22.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.476 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.476 issued rwts: total=2689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.476 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:22.476 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=968572: Wed Nov 20 09:19:47 2024 00:34:22.476 read: IOPS=972, BW=3890KiB/s (3983kB/s)(11.7MiB/3082msec) 00:34:22.476 slat (usec): min=6, max=21721, avg=58.82, stdev=748.21 00:34:22.476 clat (usec): min=271, max=4680, avg=955.90, stdev=146.42 00:34:22.476 lat (usec): min=278, max=22798, avg=1014.73, stdev=763.49 00:34:22.476 clat percentiles (usec): 00:34:22.476 | 1.00th=[ 586], 5.00th=[ 734], 10.00th=[ 799], 20.00th=[ 865], 00:34:22.477 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[ 963], 60.00th=[ 988], 00:34:22.477 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1123], 95.00th=[ 1156], 00:34:22.477 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1319], 99.95th=[ 1336], 
00:34:22.477 | 99.99th=[ 4686] 00:34:22.477 bw ( KiB/s): min= 3524, max= 4176, per=29.92%, avg=3943.33, stdev=233.52, samples=6 00:34:22.477 iops : min= 881, max= 1044, avg=985.83, stdev=58.38, samples=6 00:34:22.477 lat (usec) : 500=0.33%, 750=6.04%, 1000=58.14% 00:34:22.477 lat (msec) : 2=35.42%, 10=0.03% 00:34:22.477 cpu : usr=1.69%, sys=3.41%, ctx=3004, majf=0, minf=2 00:34:22.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:22.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.477 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.477 issued rwts: total=2998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:22.477 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=968599: Wed Nov 20 09:19:47 2024 00:34:22.477 read: IOPS=517, BW=2069KiB/s (2119kB/s)(5632KiB/2722msec) 00:34:22.477 slat (usec): min=6, max=21484, avg=52.58, stdev=713.21 00:34:22.477 clat (usec): min=238, max=42154, avg=1856.04, stdev=6299.93 00:34:22.477 lat (usec): min=245, max=42180, avg=1908.65, stdev=6335.89 00:34:22.477 clat percentiles (usec): 00:34:22.477 | 1.00th=[ 474], 5.00th=[ 603], 10.00th=[ 660], 20.00th=[ 734], 00:34:22.477 | 30.00th=[ 791], 40.00th=[ 840], 50.00th=[ 881], 60.00th=[ 922], 00:34:22.477 | 70.00th=[ 963], 80.00th=[ 996], 90.00th=[ 1074], 95.00th=[ 1172], 00:34:22.477 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:22.477 | 99.99th=[42206] 00:34:22.477 bw ( KiB/s): min= 96, max= 4544, per=15.43%, avg=2033.60, stdev=2198.78, samples=5 00:34:22.477 iops : min= 24, max= 1136, avg=508.40, stdev=549.70, samples=5 00:34:22.477 lat (usec) : 250=0.07%, 500=1.21%, 750=20.72%, 1000=58.20% 00:34:22.477 lat (msec) : 2=17.32%, 50=2.41% 00:34:22.477 cpu : usr=0.85%, sys=2.02%, ctx=1412, majf=0, minf=2 00:34:22.477 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:22.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.477 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.477 issued rwts: total=1409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:22.477 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=968610: Wed Nov 20 09:19:47 2024 00:34:22.477 read: IOPS=1207, BW=4828KiB/s (4944kB/s)(12.0MiB/2535msec) 00:34:22.477 slat (nsec): min=6659, max=60908, avg=24721.58, stdev=5140.42 00:34:22.477 clat (usec): min=282, max=1220, avg=788.96, stdev=143.70 00:34:22.477 lat (usec): min=290, max=1245, avg=813.68, stdev=144.18 00:34:22.477 clat percentiles (usec): 00:34:22.477 | 1.00th=[ 453], 5.00th=[ 545], 10.00th=[ 594], 20.00th=[ 660], 00:34:22.477 | 30.00th=[ 709], 40.00th=[ 758], 50.00th=[ 807], 60.00th=[ 848], 00:34:22.477 | 70.00th=[ 873], 80.00th=[ 914], 90.00th=[ 971], 95.00th=[ 1004], 00:34:22.477 | 99.00th=[ 1074], 99.50th=[ 1090], 99.90th=[ 1156], 99.95th=[ 1172], 00:34:22.477 | 99.99th=[ 1221] 00:34:22.477 bw ( KiB/s): min= 4720, max= 5056, per=37.08%, avg=4886.40, stdev=123.08, samples=5 00:34:22.477 iops : min= 1180, max= 1264, avg=1221.60, stdev=30.77, samples=5 00:34:22.477 lat (usec) : 500=2.65%, 750=35.54%, 1000=56.55% 00:34:22.477 lat (msec) : 2=5.23% 00:34:22.477 cpu : usr=1.07%, sys=3.71%, ctx=3063, majf=0, minf=2 00:34:22.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:22.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.477 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.477 issued rwts: total=3061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:22.477 00:34:22.477 Run status group 0 (all jobs): 
00:34:22.477 READ: bw=12.9MiB/s (13.5MB/s), 2069KiB/s-4828KiB/s (2119kB/s-4944kB/s), io=39.7MiB (41.6MB), run=2535-3082msec 00:34:22.477 00:34:22.477 Disk stats (read/write): 00:34:22.477 nvme0n1: ios=2605/0, merge=0/0, ticks=2530/0, in_queue=2530, util=90.78% 00:34:22.477 nvme0n2: ios=2991/0, merge=0/0, ticks=2666/0, in_queue=2666, util=92.07% 00:34:22.477 nvme0n3: ios=1272/0, merge=0/0, ticks=2353/0, in_queue=2353, util=95.64% 00:34:22.477 nvme0n4: ios=2785/0, merge=0/0, ticks=2132/0, in_queue=2132, util=95.98% 00:34:22.737 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:22.737 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:22.998 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:22.998 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:22.998 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:22.998 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:23.259 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:23.259 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc6 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 968203 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:23.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:23.520 nvmf hotplug test: fio failed as expected 00:34:23.520 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:23.781 rmmod nvme_tcp 00:34:23.781 rmmod nvme_fabrics 00:34:23.781 rmmod nvme_keyring 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:23.781 09:19:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 965008 ']' 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 965008 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 965008 ']' 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 965008 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.781 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 965008 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 965008' 00:34:24.043 killing process with pid 965008 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 965008 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 965008 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- 
# nvmf_tcp_fini 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:24.043 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:26.589 00:34:26.589 real 0m27.856s 00:34:26.589 user 2m19.333s 00:34:26.589 sys 0m12.580s 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:26.589 ************************************ 00:34:26.589 END TEST nvmf_fio_target 00:34:26.589 ************************************ 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:26.589 ************************************ 00:34:26.589 START TEST nvmf_bdevio 00:34:26.589 ************************************ 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:26.589 * Looking for test storage... 00:34:26.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:26.589 09:19:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:26.589 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:26.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.590 --rc genhtml_branch_coverage=1 
00:34:26.590 --rc genhtml_function_coverage=1 00:34:26.590 --rc genhtml_legend=1 00:34:26.590 --rc geninfo_all_blocks=1 00:34:26.590 --rc geninfo_unexecuted_blocks=1 00:34:26.590 00:34:26.590 ' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:26.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.590 --rc genhtml_branch_coverage=1 00:34:26.590 --rc genhtml_function_coverage=1 00:34:26.590 --rc genhtml_legend=1 00:34:26.590 --rc geninfo_all_blocks=1 00:34:26.590 --rc geninfo_unexecuted_blocks=1 00:34:26.590 00:34:26.590 ' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:26.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.590 --rc genhtml_branch_coverage=1 00:34:26.590 --rc genhtml_function_coverage=1 00:34:26.590 --rc genhtml_legend=1 00:34:26.590 --rc geninfo_all_blocks=1 00:34:26.590 --rc geninfo_unexecuted_blocks=1 00:34:26.590 00:34:26.590 ' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:26.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.590 --rc genhtml_branch_coverage=1 00:34:26.590 --rc genhtml_function_coverage=1 00:34:26.590 --rc genhtml_legend=1 00:34:26.590 --rc geninfo_all_blocks=1 00:34:26.590 --rc geninfo_unexecuted_blocks=1 00:34:26.590 00:34:26.590 ' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:26.590 09:19:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:26.590 09:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:34.848 09:19:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:34.848 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:34.848 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.848 09:19:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:34.848 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:34.848 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:34.848 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:34.849 09:19:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:34.849 09:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:34.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:34.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:34:34.849 00:34:34.849 --- 10.0.0.2 ping statistics --- 00:34:34.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.849 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:34.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:34.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:34:34.849 00:34:34.849 --- 10.0.0.1 ping statistics --- 00:34:34.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.849 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=973595 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 973595 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 973595 ']' 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.849 09:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:34.849 [2024-11-20 09:19:59.370291] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:34.849 [2024-11-20 09:19:59.371465] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:34:34.849 [2024-11-20 09:19:59.371518] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.849 [2024-11-20 09:19:59.473474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:34.849 [2024-11-20 09:19:59.526546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:34.849 [2024-11-20 09:19:59.526602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:34.849 [2024-11-20 09:19:59.526610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:34.849 [2024-11-20 09:19:59.526618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:34.849 [2024-11-20 09:19:59.526625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:34.849 [2024-11-20 09:19:59.529075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:34.849 [2024-11-20 09:19:59.529228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:34.849 [2024-11-20 09:19:59.529364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:34.849 [2024-11-20 09:19:59.529364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:34.849 [2024-11-20 09:19:59.606938] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:34.849 [2024-11-20 09:19:59.608123] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:34.849 [2024-11-20 09:19:59.608171] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:34.849 [2024-11-20 09:19:59.608582] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:34.849 [2024-11-20 09:19:59.608634] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:34.849 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:34.849 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:34.849 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:34.849 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:34.850 [2024-11-20 09:20:00.246630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:34.850 Malloc0 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:34.850 [2024-11-20 09:20:00.338950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:34.850 { 00:34:34.850 "params": { 00:34:34.850 "name": "Nvme$subsystem", 00:34:34.850 "trtype": "$TEST_TRANSPORT", 00:34:34.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:34.850 "adrfam": "ipv4", 00:34:34.850 "trsvcid": "$NVMF_PORT", 00:34:34.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:34.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:34.850 "hdgst": ${hdgst:-false}, 00:34:34.850 "ddgst": ${ddgst:-false} 00:34:34.850 }, 00:34:34.850 "method": "bdev_nvme_attach_controller" 00:34:34.850 } 00:34:34.850 EOF 00:34:34.850 )") 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:34.850 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:34.850 "params": { 00:34:34.850 "name": "Nvme1", 00:34:34.850 "trtype": "tcp", 00:34:34.850 "traddr": "10.0.0.2", 00:34:34.850 "adrfam": "ipv4", 00:34:34.850 "trsvcid": "4420", 00:34:34.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:34.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:34.850 "hdgst": false, 00:34:34.850 "ddgst": false 00:34:34.850 }, 00:34:34.850 "method": "bdev_nvme_attach_controller" 00:34:34.850 }' 00:34:35.111 [2024-11-20 09:20:00.404592] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:34:35.111 [2024-11-20 09:20:00.404667] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973746 ] 00:34:35.111 [2024-11-20 09:20:00.499260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:35.111 [2024-11-20 09:20:00.557969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.111 [2024-11-20 09:20:00.558135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.111 [2024-11-20 09:20:00.558136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.372 I/O targets: 00:34:35.372 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:35.372 00:34:35.372 00:34:35.372 CUnit - A unit testing framework for C - Version 2.1-3 00:34:35.372 http://cunit.sourceforge.net/ 00:34:35.372 00:34:35.372 00:34:35.372 Suite: bdevio tests on: Nvme1n1 00:34:35.372 Test: blockdev write read block ...passed 00:34:35.372 Test: blockdev write zeroes read block ...passed 00:34:35.372 Test: blockdev write zeroes read no split ...passed 00:34:35.633 Test: blockdev 
write zeroes read split ...passed 00:34:35.633 Test: blockdev write zeroes read split partial ...passed 00:34:35.633 Test: blockdev reset ...[2024-11-20 09:20:00.921183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:35.633 [2024-11-20 09:20:00.921295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1774970 (9): Bad file descriptor 00:34:35.633 [2024-11-20 09:20:00.975778] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:34:35.633 passed 00:34:35.633 Test: blockdev write read 8 blocks ...passed 00:34:35.633 Test: blockdev write read size > 128k ...passed 00:34:35.633 Test: blockdev write read invalid size ...passed 00:34:35.633 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:35.633 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:35.633 Test: blockdev write read max offset ...passed 00:34:35.633 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:35.633 Test: blockdev writev readv 8 blocks ...passed 00:34:35.895 Test: blockdev writev readv 30 x 1block ...passed 00:34:35.895 Test: blockdev writev readv block ...passed 00:34:35.895 Test: blockdev writev readv size > 128k ...passed 00:34:35.895 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:35.895 Test: blockdev comparev and writev ...[2024-11-20 09:20:01.284554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:35.895 [2024-11-20 09:20:01.284604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.895 [2024-11-20 09:20:01.284620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:35.895 
[2024-11-20 09:20:01.284629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:35.895 [2024-11-20 09:20:01.285184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:35.895 [2024-11-20 09:20:01.285196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:35.895 [2024-11-20 09:20:01.285210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:35.895 [2024-11-20 09:20:01.285218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:35.895 [2024-11-20 09:20:01.285831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:35.895 [2024-11-20 09:20:01.285843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:35.895 [2024-11-20 09:20:01.285857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:35.895 [2024-11-20 09:20:01.285864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:35.895 [2024-11-20 09:20:01.286467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:35.895 [2024-11-20 09:20:01.286478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:35.895 [2024-11-20 09:20:01.286493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:35.895 [2024-11-20 09:20:01.286500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:35.895 passed 00:34:35.895 Test: blockdev nvme passthru rw ...passed 00:34:35.895 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:20:01.371833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:35.895 [2024-11-20 09:20:01.371849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:35.895 [2024-11-20 09:20:01.372202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:35.895 [2024-11-20 09:20:01.372215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:35.895 [2024-11-20 09:20:01.372498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:35.895 [2024-11-20 09:20:01.372511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:35.895 [2024-11-20 09:20:01.372808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:35.895 [2024-11-20 09:20:01.372819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:35.895 passed 00:34:35.895 Test: blockdev nvme admin passthru ...passed 00:34:36.156 Test: blockdev copy ...passed 00:34:36.156 00:34:36.156 Run Summary: Type Total Ran Passed Failed Inactive 00:34:36.156 suites 1 1 n/a 0 0 00:34:36.157 tests 23 23 23 0 0 00:34:36.157 asserts 152 152 152 0 n/a 00:34:36.157 00:34:36.157 Elapsed time = 1.352 
seconds 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:36.157 rmmod nvme_tcp 00:34:36.157 rmmod nvme_fabrics 00:34:36.157 rmmod nvme_keyring 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 973595 ']' 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 973595 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 973595 ']' 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 973595 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:36.157 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973595 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973595' 00:34:36.418 killing process with pid 973595 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 973595 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 973595 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:36.418 09:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:38.964 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:38.964 00:34:38.964 real 0m12.409s 00:34:38.964 user 0m10.120s 00:34:38.964 sys 0m6.698s 00:34:38.964 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:38.964 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.964 ************************************ 00:34:38.964 END TEST nvmf_bdevio 00:34:38.964 ************************************ 00:34:38.964 09:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:38.964 00:34:38.964 real 5m0.588s 00:34:38.964 user 10m15.548s 00:34:38.964 sys 2m6.237s 00:34:38.964 09:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:34:38.964 09:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:38.964 ************************************ 00:34:38.964 END TEST nvmf_target_core_interrupt_mode 00:34:38.964 ************************************ 00:34:38.964 09:20:04 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:38.964 09:20:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:38.964 09:20:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:38.964 09:20:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:38.964 ************************************ 00:34:38.964 START TEST nvmf_interrupt 00:34:38.964 ************************************ 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:38.964 * Looking for test storage... 
00:34:38.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:38.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.964 --rc genhtml_branch_coverage=1 00:34:38.964 --rc genhtml_function_coverage=1 00:34:38.964 --rc genhtml_legend=1 00:34:38.964 --rc geninfo_all_blocks=1 00:34:38.964 --rc geninfo_unexecuted_blocks=1 00:34:38.964 00:34:38.964 ' 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:38.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.964 --rc genhtml_branch_coverage=1 00:34:38.964 --rc 
genhtml_function_coverage=1 00:34:38.964 --rc genhtml_legend=1 00:34:38.964 --rc geninfo_all_blocks=1 00:34:38.964 --rc geninfo_unexecuted_blocks=1 00:34:38.964 00:34:38.964 ' 00:34:38.964 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:38.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.964 --rc genhtml_branch_coverage=1 00:34:38.964 --rc genhtml_function_coverage=1 00:34:38.964 --rc genhtml_legend=1 00:34:38.964 --rc geninfo_all_blocks=1 00:34:38.964 --rc geninfo_unexecuted_blocks=1 00:34:38.965 00:34:38.965 ' 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:38.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.965 --rc genhtml_branch_coverage=1 00:34:38.965 --rc genhtml_function_coverage=1 00:34:38.965 --rc genhtml_legend=1 00:34:38.965 --rc geninfo_all_blocks=1 00:34:38.965 --rc geninfo_unexecuted_blocks=1 00:34:38.965 00:34:38.965 ' 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:38.965 
09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.965 
09:20:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:38.965 09:20:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:38.965 
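The xtrace above (nvmf/common.sh@25-34) shows the harness conditionally appending flags to the `NVMF_APP` array before launch. A minimal sketch of that pattern, with `TEST_INTERRUPT`, `NVMF_APP_SHM_ID`, and the binary path as stand-in assumptions rather than the harness's real environment:

```shell
#!/usr/bin/env bash
# Sketch (simplified from the nvmf/common.sh lines traced above) of how the
# nvmf_tgt argument array is assembled. The variable values and binary path
# here are assumptions for illustration, not the test's real configuration.
NVMF_APP_SHM_ID=0
TEST_INTERRUPT=1

NVMF_APP=(/usr/local/bin/nvmf_tgt)           # hypothetical binary path
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id + tracepoint mask, as in common.sh@29
if [ "$TEST_INTERRUPT" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)             # the branch taken at common.sh@33-34
fi

echo "${NVMF_APP[@]}"
```

Building the command as an array (rather than a string) is what lets the harness later prefix it with `ip netns exec …` without re-quoting, as common.sh@293 does further down.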
09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:38.965 09:20:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.102 09:20:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:47.102 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:47.102 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.102 09:20:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:47.102 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:47.102 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.102 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.103 09:20:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:47.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:47.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms
00:34:47.103
00:34:47.103 --- 10.0.0.2 ping statistics ---
00:34:47.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:47.103 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms
00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:47.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:47.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms
00:34:47.103
00:34:47.103 --- 10.0.0.1 ping statistics ---
00:34:47.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:47.103 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms
00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:47.103 09:20:11
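The trace at nvmf/common.sh@265-291 moves one port of the NIC pair (cvl_0_0, 10.0.0.2) into a fresh network namespace while the peer (cvl_0_1, 10.0.0.1) stays in the root namespace, then ping-verifies both directions. A dry-run sketch of that plumbing, using the interface and address names from the log; `run` only echoes so the sketch works without root (drop the `echo` to execute for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup the log performs. Interface names, the
# namespace name, and addresses are taken from the log; "run" echoes instead
# of executing so no root privileges or real NICs are needed here.
run() { echo "$@"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target-side port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator port stays in root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                                       # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target ns -> initiator
```

Isolating the target port this way is what forces initiator-to-target traffic over the physical link instead of the loopback path, which is why the harness can then run the target under `ip netns exec "$NS" …`.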
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=978160 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 978160 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 978160 ']' 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.103 09:20:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:47.103 [2024-11-20 09:20:11.899909] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:47.103 [2024-11-20 09:20:11.901056] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:34:47.103 [2024-11-20 09:20:11.901109] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.103 [2024-11-20 09:20:11.999226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:47.103 [2024-11-20 09:20:12.050697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.103 [2024-11-20 09:20:12.050747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.103 [2024-11-20 09:20:12.050756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:47.103 [2024-11-20 09:20:12.050763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:47.103 [2024-11-20 09:20:12.050770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:47.103 [2024-11-20 09:20:12.052320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.103 [2024-11-20 09:20:12.052368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.103 [2024-11-20 09:20:12.128808] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:47.103 [2024-11-20 09:20:12.129371] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:47.103 [2024-11-20 09:20:12.129684] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:47.364 5000+0 records in 00:34:47.364 5000+0 records out 00:34:47.364 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0189286 s, 541 MB/s 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:47.364 AIO0 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.364 09:20:12 
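The `setup_bdev_aio` step above (interrupt/common.sh@78-79) backs the future namespace with a plain file: `dd` writes 5000 blocks of 2048 bytes (~10 MB), then `bdev_aio_create` registers it as `AIO0` with a 2048-byte block size. A small sketch of the same pattern; a tiny block count is used here so it runs anywhere, and the RPC call is shown only as a comment since it needs a live SPDK target:

```shell
#!/usr/bin/env bash
# Sketch of creating an AIO backing file as the test does. The real run uses
# bs=2048 count=5000 (~10 MB); a small count is used here for illustration.
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=16 status=none

# In the real test this file is then registered over the RPC socket, e.g.:
#   rpc.py bdev_aio_create "$aiofile" AIO0 2048
# which creates the AIO0 bdev later attached as a namespace of cnode1.
wc -c < "$aiofile"
```

Because the bdev block size (2048) matches the `dd` block size, the resulting AIO0 bdev exposes exactly `count` blocks, which is what the subsequent `nvmf_subsystem_add_ns` hands to the NVMe-oF subsystem.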
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:47.364 [2024-11-20 09:20:12.821374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:47.364 [2024-11-20 09:20:12.865912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 978160 0 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 978160 0 idle 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=978160 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 978160 -w 256 00:34:47.364 09:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:47.624 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 978160 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.31 reactor_0' 00:34:47.624 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 978160 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.31 reactor_0 00:34:47.624 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:47.624 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:47.624 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:47.624 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 978160 1 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 978160 1 idle 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=978160 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 978160 -w 256 00:34:47.625 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 978206 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 978206 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 
reactor_1 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=978470 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 978160 0 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 978160 0 busy 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=978160 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 978160 -w 256 00:34:47.885 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 978160 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0' 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 978160 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 978160 1 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 978160 1 busy 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=978160 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 978160 -w 256 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 978206 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1' 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 978206 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:34:48.146 09:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 978470
00:34:58.140 Initializing NVMe Controllers
00:34:58.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:58.140 Controller IO queue size 256, less than required.
00:34:58.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:34:58.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:34:58.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:34:58.140 Initialization complete. Launching workers.
00:34:58.140 ========================================================
00:34:58.140                                                                                                                Latency(us)
00:34:58.140 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:34:58.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   19069.95      74.49   13428.80    4625.55   31458.41
00:34:58.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   19714.95      77.01   12986.45    8253.87   27808.17
00:34:58.140 ========================================================
00:34:58.140 Total                                                                    :   38784.90     151.50   13203.94    4625.55   31458.41
00:34:58.140
00:34:58.140 [2024-11-20 09:20:23.390155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1022720 is same with the state(6) to be set
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 978160 0
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 978160 0 idle
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=978160
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:34:58.140 09:20:23
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 978160 -w 256 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 978160 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0' 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 978160 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 978160 1 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 978160 1 idle 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=978160 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 
-- # local idle_threshold=30 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 978160 -w 256 00:34:58.140 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:58.400 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 978206 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:34:58.400 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 978206 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:34:58.400 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:58.400 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:58.400 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:58.400 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:58.400 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:58.400 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:58.400 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:58.400 09:20:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:58.400 09:20:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:58.971 09:20:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:58.971 09:20:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:34:58.971 09:20:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:58.971 09:20:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:58.971 09:20:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:01.510 09:20:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:01.510 09:20:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 978160 0 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 978160 0 idle 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=978160 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local 
idle_threshold=30 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 978160 -w 256 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 978160 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.69 reactor_0' 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 978160 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.69 reactor_0 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 978160 1 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 978160 1 idle 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=978160 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 978160 -w 256 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 978206 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 978206 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:01.511 09:20:26 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:01.511 09:20:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:01.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:01.772 rmmod nvme_tcp 00:35:01.772 rmmod 
nvme_fabrics 00:35:01.772 rmmod nvme_keyring 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 978160 ']' 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 978160 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 978160 ']' 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 978160 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 978160 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 978160' 00:35:01.772 killing process with pid 978160 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 978160 00:35:01.772 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 978160 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- 
# iptables-save 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:02.033 09:20:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.576 09:20:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:04.576 00:35:04.576 real 0m25.360s 00:35:04.576 user 0m40.409s 00:35:04.576 sys 0m9.595s 00:35:04.576 09:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:04.576 09:20:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:04.576 ************************************ 00:35:04.576 END TEST nvmf_interrupt 00:35:04.576 ************************************ 00:35:04.576 00:35:04.576 real 30m10.128s 00:35:04.576 user 61m19.915s 00:35:04.576 sys 10m20.782s 00:35:04.576 09:20:29 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:04.576 09:20:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.576 ************************************ 00:35:04.576 END TEST nvmf_tcp 00:35:04.577 ************************************ 00:35:04.577 09:20:29 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:04.577 09:20:29 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:04.577 09:20:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:04.577 09:20:29 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:35:04.577 09:20:29 -- common/autotest_common.sh@10 -- # set +x 00:35:04.577 ************************************ 00:35:04.577 START TEST spdkcli_nvmf_tcp 00:35:04.577 ************************************ 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:04.577 * Looking for test storage... 00:35:04.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@345 
-- # : 1 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:04.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.577 --rc genhtml_branch_coverage=1 00:35:04.577 --rc genhtml_function_coverage=1 00:35:04.577 --rc genhtml_legend=1 00:35:04.577 --rc geninfo_all_blocks=1 00:35:04.577 --rc geninfo_unexecuted_blocks=1 00:35:04.577 00:35:04.577 ' 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:04.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:35:04.577 --rc genhtml_branch_coverage=1 00:35:04.577 --rc genhtml_function_coverage=1 00:35:04.577 --rc genhtml_legend=1 00:35:04.577 --rc geninfo_all_blocks=1 00:35:04.577 --rc geninfo_unexecuted_blocks=1 00:35:04.577 00:35:04.577 ' 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:04.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.577 --rc genhtml_branch_coverage=1 00:35:04.577 --rc genhtml_function_coverage=1 00:35:04.577 --rc genhtml_legend=1 00:35:04.577 --rc geninfo_all_blocks=1 00:35:04.577 --rc geninfo_unexecuted_blocks=1 00:35:04.577 00:35:04.577 ' 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:04.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.577 --rc genhtml_branch_coverage=1 00:35:04.577 --rc genhtml_function_coverage=1 00:35:04.577 --rc genhtml_legend=1 00:35:04.577 --rc geninfo_all_blocks=1 00:35:04.577 --rc geninfo_unexecuted_blocks=1 00:35:04.577 00:35:04.577 ' 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.577 
09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.577 09:20:29 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:04.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=981710 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 981710 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 981710 ']' 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:04.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:04.578 09:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.578 [2024-11-20 09:20:29.894106] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:35:04.578 [2024-11-20 09:20:29.894170] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981710 ] 00:35:04.578 [2024-11-20 09:20:29.984847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:04.578 [2024-11-20 09:20:30.040995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.578 [2024-11-20 09:20:30.041000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.520 09:20:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.520 09:20:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:05.520 09:20:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:05.520 09:20:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:05.520 09:20:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:05.520 09:20:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:05.520 09:20:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:05.520 09:20:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:05.520 
09:20:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:05.520 09:20:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:05.520 09:20:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:05.520 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:05.520 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:05.520 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:05.520 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:05.520 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:05.520 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:05.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:05.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:05.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:05.520 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:05.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:05.520 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:05.520 ' 00:35:08.064 [2024-11-20 09:20:33.438328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:09.446 [2024-11-20 09:20:34.802482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:11.985 [2024-11-20 09:20:37.321443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:35:14.529 [2024-11-20 09:20:39.539733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:15.910 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:15.910 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:15.910 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:15.910 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:15.910 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:15.910 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:15.910 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:15.910 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:15.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:15.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:15.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:15.910 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:15.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:15.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:15.910 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:35:15.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:15.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:15.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:15.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:15.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:15.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:15.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:15.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:15.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:15.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:15.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:15.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:15.911 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:15.911 09:20:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:15.911 09:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:15.911 
09:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:15.911 09:20:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:15.911 09:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:15.911 09:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:15.911 09:20:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:15.911 09:20:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:16.480 09:20:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:16.480 09:20:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:16.480 09:20:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:16.480 09:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:16.480 09:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:16.480 09:20:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:16.480 09:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:16.480 09:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:16.480 09:20:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:16.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:16.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:16.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:16.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:16.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:16.480 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:16.480 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:16.480 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:16.480 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:16.480 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:16.480 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:16.480 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:16.480 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:16.480 ' 00:35:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:23.059 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:23.059 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:23.059 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:23.059 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:23.059 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:23.059 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:23.059 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:23.059 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 981710 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 981710 ']' 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 981710 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 981710 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 981710' 00:35:23.059 killing process with pid 981710 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 981710 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 981710 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 981710 ']' 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 981710 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 981710 ']' 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 981710 00:35:23.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (981710) - No such process 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 981710 is not found' 00:35:23.059 Process with pid 981710 is not found 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:23.059 00:35:23.059 real 0m18.087s 00:35:23.059 user 0m40.117s 00:35:23.059 sys 0m0.869s 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:23.059 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:23.059 ************************************ 00:35:23.059 END TEST spdkcli_nvmf_tcp 00:35:23.059 ************************************ 00:35:23.059 09:20:47 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:23.059 09:20:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:23.059 09:20:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:23.059 09:20:47 -- common/autotest_common.sh@10 
-- # set +x 00:35:23.059 ************************************ 00:35:23.059 START TEST nvmf_identify_passthru 00:35:23.059 ************************************ 00:35:23.059 09:20:47 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:23.059 * Looking for test storage... 00:35:23.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:23.059 09:20:47 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:23.059 09:20:47 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:35:23.059 09:20:47 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:23.059 09:20:47 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:23.059 09:20:47 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:23.059 09:20:47 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:23.059 09:20:47 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:23.059 09:20:47 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:23.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.059 --rc genhtml_branch_coverage=1 00:35:23.059 --rc genhtml_function_coverage=1 00:35:23.060 --rc genhtml_legend=1 00:35:23.060 --rc geninfo_all_blocks=1 00:35:23.060 --rc geninfo_unexecuted_blocks=1 00:35:23.060 00:35:23.060 ' 00:35:23.060 
09:20:47 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:23.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.060 --rc genhtml_branch_coverage=1 00:35:23.060 --rc genhtml_function_coverage=1 00:35:23.060 --rc genhtml_legend=1 00:35:23.060 --rc geninfo_all_blocks=1 00:35:23.060 --rc geninfo_unexecuted_blocks=1 00:35:23.060 00:35:23.060 ' 00:35:23.060 09:20:47 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:23.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.060 --rc genhtml_branch_coverage=1 00:35:23.060 --rc genhtml_function_coverage=1 00:35:23.060 --rc genhtml_legend=1 00:35:23.060 --rc geninfo_all_blocks=1 00:35:23.060 --rc geninfo_unexecuted_blocks=1 00:35:23.060 00:35:23.060 ' 00:35:23.060 09:20:47 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:23.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.060 --rc genhtml_branch_coverage=1 00:35:23.060 --rc genhtml_function_coverage=1 00:35:23.060 --rc genhtml_legend=1 00:35:23.060 --rc geninfo_all_blocks=1 00:35:23.060 --rc geninfo_unexecuted_blocks=1 00:35:23.060 00:35:23.060 ' 00:35:23.060 09:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:23.060 09:20:47 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:23.060 09:20:47 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:23.060 09:20:47 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:23.060 09:20:47 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:23.060 09:20:47 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.060 09:20:47 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.060 09:20:47 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.060 09:20:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:23.060 09:20:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:23.060 09:20:47 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:23.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:23.060 09:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:23.060 09:20:47 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:23.060 09:20:47 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:23.060 09:20:47 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:23.060 09:20:47 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:23.060 09:20:47 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.060 09:20:47 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.060 09:20:47 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.060 09:20:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:23.060 09:20:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.060 09:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:23.060 09:20:47 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:23.060 09:20:48 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:23.060 09:20:48 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:23.060 09:20:48 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:23.060 09:20:48 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.060 09:20:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:23.060 09:20:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.060 09:20:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:23.060 09:20:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:23.060 09:20:48 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:23.060 09:20:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:29.636 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:29.637 
09:20:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:29.637 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:29.637 Found 0000:4b:00.1 
(0x8086 - 0x159b) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:29.637 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.637 09:20:55 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:29.637 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:29.637 
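The device-discovery loop above globs `/sys/bus/pci/devices/$pci/net/*` and then strips the directory prefix at common.sh@427 with `${pci_net_devs[@]##*/}`, leaving bare interface names such as `cvl_0_0`. A minimal sketch of that expansion, using a canned path rather than a live sysfs tree:

```shell
# Simulate the glob result common.sh@411 would collect for one PCI function
# (the path and interface name are illustrative, taken from the log above).
pci_net_devs=("/sys/bus/pci/devices/0000:4b:00.0/net/cvl_0_0")

# ${var##*/} removes the longest prefix ending in '/', i.e. the directory part,
# so each sysfs path collapses to just the netdev name.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under 0000:4b:00.0: ${pci_net_devs[*]}"
```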
09:20:55 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:29.637 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:29.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:29.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:35:29.897 00:35:29.897 --- 10.0.0.2 ping statistics --- 00:35:29.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.897 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:29.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:29.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:35:29.897 00:35:29.897 --- 10.0.0.1 ping statistics --- 00:35:29.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.897 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:29.897 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:30.157 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:30.157 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:30.157 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:30.157 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:30.157 
09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:30.158 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:30.158 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:30.158 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:30.158 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:30.158 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:30.158 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:30.158 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:30.158 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:30.158 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:30.158 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:35:30.158 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:35:30.158 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:30.158 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:30.158 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:30.158 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:30.158 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:30.727 09:20:56 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:35:30.727 09:20:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:30.727 09:20:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:30.727 09:20:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:31.299 09:20:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:31.299 09:20:56 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:31.299 09:20:56 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:31.299 09:20:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:31.299 09:20:56 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:31.299 09:20:56 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.299 09:20:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:31.299 09:20:56 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=989063 00:35:31.299 09:20:56 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:31.299 09:20:56 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:31.299 09:20:56 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 989063 00:35:31.299 09:20:56 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 989063 ']' 00:35:31.299 09:20:56 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
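identify_passthru.sh@23 derives the serial number by piping the `spdk_nvme_identify` output through `grep 'Serial Number:' | awk '{print $3}'`. A sketch of that extraction against a sample line (the serial value is the hypothetical one seen in this log, not read from real hardware):

```shell
# One line in the style of spdk_nvme_identify output; fields are
# "Serial"(1) "Number:"(2) "<value>"(3), so awk prints field 3.
identify_output='Serial Number: S64GNE0R605487'

nvme_serial_number=$(printf '%s\n' "$identify_output" \
  | grep 'Serial Number:' \
  | awk '{print $3}')

echo "$nvme_serial_number"
```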
00:35:31.299 09:20:56 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.299 09:20:56 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.299 09:20:56 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.299 09:20:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:31.299 [2024-11-20 09:20:56.704098] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:35:31.299 [2024-11-20 09:20:56.704153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.299 [2024-11-20 09:20:56.797344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:31.559 [2024-11-20 09:20:56.835388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:31.559 [2024-11-20 09:20:56.835421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:31.559 [2024-11-20 09:20:56.835429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:31.559 [2024-11-20 09:20:56.835436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:31.559 [2024-11-20 09:20:56.835442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:31.559 [2024-11-20 09:20:56.836954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.559 [2024-11-20 09:20:56.837104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:31.559 [2024-11-20 09:20:56.837299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.559 [2024-11-20 09:20:56.837299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:32.129 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.129 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:32.129 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:32.129 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.129 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.129 INFO: Log level set to 20 00:35:32.129 INFO: Requests: 00:35:32.129 { 00:35:32.129 "jsonrpc": "2.0", 00:35:32.129 "method": "nvmf_set_config", 00:35:32.129 "id": 1, 00:35:32.129 "params": { 00:35:32.129 "admin_cmd_passthru": { 00:35:32.129 "identify_ctrlr": true 00:35:32.129 } 00:35:32.129 } 00:35:32.129 } 00:35:32.129 00:35:32.129 INFO: response: 00:35:32.129 { 00:35:32.129 "jsonrpc": "2.0", 00:35:32.129 "id": 1, 00:35:32.129 "result": true 00:35:32.129 } 00:35:32.129 00:35:32.129 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.129 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:32.129 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.129 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.129 INFO: Setting log level to 20 00:35:32.129 INFO: Setting log level to 20 00:35:32.129 INFO: Log level set to 20 00:35:32.129 INFO: Log level set to 20 00:35:32.130 
INFO: Requests: 00:35:32.130 { 00:35:32.130 "jsonrpc": "2.0", 00:35:32.130 "method": "framework_start_init", 00:35:32.130 "id": 1 00:35:32.130 } 00:35:32.130 00:35:32.130 INFO: Requests: 00:35:32.130 { 00:35:32.130 "jsonrpc": "2.0", 00:35:32.130 "method": "framework_start_init", 00:35:32.130 "id": 1 00:35:32.130 } 00:35:32.130 00:35:32.130 [2024-11-20 09:20:57.553787] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:32.130 INFO: response: 00:35:32.130 { 00:35:32.130 "jsonrpc": "2.0", 00:35:32.130 "id": 1, 00:35:32.130 "result": true 00:35:32.130 } 00:35:32.130 00:35:32.130 INFO: response: 00:35:32.130 { 00:35:32.130 "jsonrpc": "2.0", 00:35:32.130 "id": 1, 00:35:32.130 "result": true 00:35:32.130 } 00:35:32.130 00:35:32.130 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.130 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:32.130 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.130 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.130 INFO: Setting log level to 40 00:35:32.130 INFO: Setting log level to 40 00:35:32.130 INFO: Setting log level to 40 00:35:32.130 [2024-11-20 09:20:57.567136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.130 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.130 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:32.130 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:32.130 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.130 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:32.130 09:20:57 
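The `rpc_cmd` calls above drive the target over the UNIX socket `/var/tmp/spdk.sock` using JSON-RPC 2.0; the INFO blocks show each request/response pair. A sketch of the request shape only, without contacting a live socket (the payload mirrors the `nvmf_set_config` request logged above):

```shell
# JSON-RPC 2.0 request as logged for 'rpc_cmd -v nvmf_set_config
# --passthru-identify-ctrlr'; built locally for illustration.
request=$(cat <<'EOF'
{
  "jsonrpc": "2.0",
  "method": "nvmf_set_config",
  "id": 1,
  "params": {
    "admin_cmd_passthru": {
      "identify_ctrlr": true
    }
  }
}
EOF
)

# Pull out the method field without a JSON parser -- adequate for this
# fixed, known shape, though a real client should use proper JSON tooling.
method=$(printf '%s\n' "$request" | grep -o '"method": "[^"]*"')
echo "$method"
```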
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.130 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.700 Nvme0n1 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.700 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.700 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.700 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.700 [2024-11-20 09:20:57.960442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.700 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.700 09:20:57 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.700 [ 00:35:32.700 { 00:35:32.700 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:32.700 "subtype": "Discovery", 00:35:32.700 "listen_addresses": [], 00:35:32.700 "allow_any_host": true, 00:35:32.700 "hosts": [] 00:35:32.700 }, 00:35:32.700 { 00:35:32.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:32.700 "subtype": "NVMe", 00:35:32.700 "listen_addresses": [ 00:35:32.700 { 00:35:32.700 "trtype": "TCP", 00:35:32.700 "adrfam": "IPv4", 00:35:32.700 "traddr": "10.0.0.2", 00:35:32.700 "trsvcid": "4420" 00:35:32.700 } 00:35:32.700 ], 00:35:32.700 "allow_any_host": true, 00:35:32.700 "hosts": [], 00:35:32.700 "serial_number": "SPDK00000000000001", 00:35:32.700 "model_number": "SPDK bdev Controller", 00:35:32.700 "max_namespaces": 1, 00:35:32.700 "min_cntlid": 1, 00:35:32.700 "max_cntlid": 65519, 00:35:32.700 "namespaces": [ 00:35:32.700 { 00:35:32.700 "nsid": 1, 00:35:32.700 "bdev_name": "Nvme0n1", 00:35:32.700 "name": "Nvme0n1", 00:35:32.700 "nguid": "36344730526054870025384500000044", 00:35:32.700 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:32.700 } 00:35:32.700 ] 00:35:32.700 } 00:35:32.700 ] 00:35:32.700 09:20:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.700 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:32.700 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:32.700 09:20:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:32.700 09:20:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:32.700 09:20:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:32.700 09:20:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:32.700 09:20:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:32.961 09:20:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:32.961 09:20:58 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:32.961 09:20:58 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:32.961 09:20:58 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.961 09:20:58 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:32.961 09:20:58 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:32.961 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:32.961 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:32.961 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:32.961 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:32.961 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:32.961 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:32.961 rmmod nvme_tcp 00:35:32.961 rmmod nvme_fabrics 00:35:32.961 rmmod nvme_keyring 00:35:32.961 09:20:58 
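The checks at identify_passthru.sh@63 and @68 compare the serial and model numbers read locally over PCIe against the values read back over NVMe/TCP; with `--passthru-identify-ctrlr` enabled they must match. A sketch of that verification pattern, with both values hard-coded to the serial seen in this log:

```shell
nvme_serial_number=S64GNE0R605487   # as read from the local PCIe controller
nvmf_serial_number=S64GNE0R605487   # as read back over the NVMe/TCP listener

# With admin-command passthrough enabled, the fabrics-side identify data
# should mirror the physical controller; a mismatch fails the test.
if [ "$nvme_serial_number" != "$nvmf_serial_number" ]; then
  echo "passthrough serial number mismatch"
  exit 1
fi
echo "serial numbers match"
```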
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:32.961 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:32.961 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:32.961 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 989063 ']' 00:35:32.961 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 989063 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 989063 ']' 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 989063 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 989063 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 989063' 00:35:32.961 killing process with pid 989063 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 989063 00:35:32.961 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 989063 00:35:33.221 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:33.221 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:33.221 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:33.221 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:33.221 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:33.221 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:35:33.221 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:33.221 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:33.221 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:33.221 09:20:58 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:33.222 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:33.222 09:20:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.769 09:21:00 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:35.769 00:35:35.769 real 0m13.054s 00:35:35.769 user 0m9.881s 00:35:35.769 sys 0m6.606s 00:35:35.769 09:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:35.769 09:21:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.769 ************************************ 00:35:35.769 END TEST nvmf_identify_passthru 00:35:35.769 ************************************ 00:35:35.769 09:21:00 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:35.769 09:21:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:35.769 09:21:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:35.769 09:21:00 -- common/autotest_common.sh@10 -- # set +x 00:35:35.769 ************************************ 00:35:35.769 START TEST nvmf_dif 00:35:35.769 ************************************ 00:35:35.769 09:21:00 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:35.769 * Looking for test storage... 
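The `iptr` teardown above works because every rule the test added (via the `ipts` wrapper at common.sh@287) carried an `-m comment --comment 'SPDK_NVMF:...'` tag; cleanup is then `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The filtering step can be sketched without root, on a canned ruleset (the rules below are illustrative, not from a live host):

```shell
# iptables-save style output: one SPDK_NVMF-tagged rule plus a pre-existing one.
saved_rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -p tcp --dport 22 -j ACCEPT'

# Dropping every tagged line leaves only the rules that predate the test;
# piping the result back through iptables-restore would undo the changes.
remaining=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
echo "$remaining"
```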
00:35:35.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:35.769 09:21:01 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:35.769 09:21:01 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:35:35.769 09:21:01 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:35.769 09:21:01 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:35.769 09:21:01 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:35.770 09:21:01 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:35.770 09:21:01 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:35.770 09:21:01 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:35.770 09:21:01 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:35.770 09:21:01 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:35.770 09:21:01 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:35.770 09:21:01 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:35.770 09:21:01 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:35.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.770 --rc genhtml_branch_coverage=1 00:35:35.770 --rc genhtml_function_coverage=1 00:35:35.770 --rc genhtml_legend=1 00:35:35.770 --rc geninfo_all_blocks=1 00:35:35.770 --rc geninfo_unexecuted_blocks=1 00:35:35.770 00:35:35.770 ' 00:35:35.770 09:21:01 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:35.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.770 --rc genhtml_branch_coverage=1 00:35:35.770 --rc genhtml_function_coverage=1 00:35:35.770 --rc genhtml_legend=1 00:35:35.770 --rc geninfo_all_blocks=1 00:35:35.770 --rc geninfo_unexecuted_blocks=1 00:35:35.770 00:35:35.770 ' 00:35:35.770 09:21:01 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:35:35.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.770 --rc genhtml_branch_coverage=1 00:35:35.770 --rc genhtml_function_coverage=1 00:35:35.770 --rc genhtml_legend=1 00:35:35.770 --rc geninfo_all_blocks=1 00:35:35.770 --rc geninfo_unexecuted_blocks=1 00:35:35.770 00:35:35.770 ' 00:35:35.770 09:21:01 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:35.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.770 --rc genhtml_branch_coverage=1 00:35:35.770 --rc genhtml_function_coverage=1 00:35:35.770 --rc genhtml_legend=1 00:35:35.770 --rc geninfo_all_blocks=1 00:35:35.770 --rc geninfo_unexecuted_blocks=1 00:35:35.770 00:35:35.770 ' 00:35:35.770 09:21:01 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:35.770 09:21:01 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:35.770 09:21:01 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:35.770 09:21:01 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:35.770 09:21:01 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:35.770 09:21:01 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:35.770 09:21:01 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.770 09:21:01 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.770 09:21:01 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.770 09:21:01 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:35.770 09:21:01 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:35.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:35.770 09:21:01 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:35.770 09:21:01 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:35:35.770 09:21:01 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:35.770 09:21:01 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:35.770 09:21:01 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.770 09:21:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:35.770 09:21:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:35.770 09:21:01 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:35:35.770 09:21:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:44.041 09:21:08 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:44.041 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:44.041 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.041 09:21:08 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:44.041 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:44.041 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:44.041 
09:21:08 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:44.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:44.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:35:44.041 00:35:44.041 --- 10.0.0.2 ping statistics --- 00:35:44.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:44.041 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:44.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:44.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:35:44.041 00:35:44.041 --- 10.0.0.1 ping statistics --- 00:35:44.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:44.041 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:44.041 09:21:08 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:46.619 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:46.619 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:35:46.619 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:46.619 09:21:12 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:46.619 09:21:12 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:46.619 09:21:12 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:46.619 09:21:12 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:46.619 09:21:12 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:46.619 09:21:12 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:46.619 09:21:12 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:46.619 09:21:12 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:46.619 09:21:12 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:46.619 09:21:12 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:46.619 09:21:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:46.619 09:21:12 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=995810 00:35:46.619 09:21:12 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 995810 00:35:46.619 09:21:12 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:46.619 09:21:12 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 995810 ']' 00:35:46.619 09:21:12 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.619 09:21:12 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.619 09:21:12 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:46.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.619 09:21:12 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.619 09:21:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:46.880 [2024-11-20 09:21:12.150679] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:35:46.880 [2024-11-20 09:21:12.150726] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:46.880 [2024-11-20 09:21:12.242709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.880 [2024-11-20 09:21:12.277761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.880 [2024-11-20 09:21:12.277792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:46.880 [2024-11-20 09:21:12.277801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.880 [2024-11-20 09:21:12.277807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.880 [2024-11-20 09:21:12.277813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:46.880 [2024-11-20 09:21:12.278380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.452 09:21:12 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.452 09:21:12 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:35:47.452 09:21:12 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:47.452 09:21:12 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:47.452 09:21:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:47.452 09:21:12 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.452 09:21:12 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:47.452 09:21:12 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:47.452 09:21:12 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.452 09:21:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:47.713 [2024-11-20 09:21:12.979043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.713 09:21:12 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.713 09:21:12 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:47.713 09:21:12 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:47.713 09:21:12 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:47.713 09:21:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:47.713 ************************************ 00:35:47.713 START TEST fio_dif_1_default 00:35:47.713 ************************************ 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:47.713 bdev_null0 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:47.713 [2024-11-20 09:21:13.067483] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:47.713 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:47.714 { 00:35:47.714 "params": { 00:35:47.714 "name": "Nvme$subsystem", 00:35:47.714 "trtype": "$TEST_TRANSPORT", 00:35:47.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.714 "adrfam": "ipv4", 00:35:47.714 "trsvcid": "$NVMF_PORT", 00:35:47.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.714 "hdgst": ${hdgst:-false}, 00:35:47.714 "ddgst": ${ddgst:-false} 00:35:47.714 }, 00:35:47.714 "method": "bdev_nvme_attach_controller" 00:35:47.714 } 00:35:47.714 EOF 00:35:47.714 )") 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:47.714 "params": { 00:35:47.714 "name": "Nvme0", 00:35:47.714 "trtype": "tcp", 00:35:47.714 "traddr": "10.0.0.2", 00:35:47.714 "adrfam": "ipv4", 00:35:47.714 "trsvcid": "4420", 00:35:47.714 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:47.714 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:47.714 "hdgst": false, 00:35:47.714 "ddgst": false 00:35:47.714 }, 00:35:47.714 "method": "bdev_nvme_attach_controller" 00:35:47.714 }' 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:47.714 09:21:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.975 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:47.975 fio-3.35 
00:35:47.975 Starting 1 thread 00:36:00.206 00:36:00.206 filename0: (groupid=0, jobs=1): err= 0: pid=996343: Wed Nov 20 09:21:24 2024 00:36:00.206 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10021msec) 00:36:00.206 slat (nsec): min=5406, max=59524, avg=6243.63, stdev=2300.43 00:36:00.206 clat (usec): min=806, max=44505, avg=40882.76, stdev=2584.43 00:36:00.206 lat (usec): min=812, max=44538, avg=40889.01, stdev=2584.59 00:36:00.206 clat percentiles (usec): 00:36:00.206 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:00.206 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:00.206 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:36:00.206 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:36:00.206 | 99.99th=[44303] 00:36:00.206 bw ( KiB/s): min= 384, max= 416, per=99.70%, avg=390.40, stdev=13.13, samples=20 00:36:00.206 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:36:00.206 lat (usec) : 1000=0.41% 00:36:00.206 lat (msec) : 50=99.59% 00:36:00.206 cpu : usr=93.35%, sys=6.41%, ctx=19, majf=0, minf=245 00:36:00.206 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:00.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.206 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:00.206 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:00.206 00:36:00.206 Run status group 0 (all jobs): 00:36:00.206 READ: bw=391KiB/s (401kB/s), 391KiB/s-391KiB/s (401kB/s-401kB/s), io=3920KiB (4014kB), run=10021-10021msec 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:00.206 
09:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.206 00:36:00.206 real 0m11.267s 00:36:00.206 user 0m16.228s 00:36:00.206 sys 0m1.100s 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:00.206 ************************************ 00:36:00.206 END TEST fio_dif_1_default 00:36:00.206 ************************************ 00:36:00.206 09:21:24 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:00.206 09:21:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:00.206 09:21:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.206 09:21:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:00.206 ************************************ 00:36:00.206 START TEST fio_dif_1_multi_subsystems 00:36:00.206 ************************************ 00:36:00.206 09:21:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:00.206 bdev_null0 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.206 09:21:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:00.206 [2024-11-20 09:21:24.409676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.206 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:00.206 bdev_null1 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:00.207 { 00:36:00.207 "params": { 00:36:00.207 "name": "Nvme$subsystem", 00:36:00.207 "trtype": "$TEST_TRANSPORT", 00:36:00.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:00.207 "adrfam": "ipv4", 00:36:00.207 "trsvcid": "$NVMF_PORT", 00:36:00.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:00.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:00.207 "hdgst": ${hdgst:-false}, 00:36:00.207 "ddgst": ${ddgst:-false} 00:36:00.207 }, 00:36:00.207 "method": "bdev_nvme_attach_controller" 00:36:00.207 } 00:36:00.207 EOF 00:36:00.207 )") 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:00.207 { 00:36:00.207 "params": { 00:36:00.207 "name": "Nvme$subsystem", 00:36:00.207 "trtype": "$TEST_TRANSPORT", 00:36:00.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:00.207 "adrfam": "ipv4", 00:36:00.207 "trsvcid": "$NVMF_PORT", 00:36:00.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:00.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:00.207 "hdgst": ${hdgst:-false}, 00:36:00.207 "ddgst": ${ddgst:-false} 00:36:00.207 }, 00:36:00.207 "method": "bdev_nvme_attach_controller" 00:36:00.207 } 00:36:00.207 EOF 00:36:00.207 )") 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:00.207 "params": { 00:36:00.207 "name": "Nvme0", 00:36:00.207 "trtype": "tcp", 00:36:00.207 "traddr": "10.0.0.2", 00:36:00.207 "adrfam": "ipv4", 00:36:00.207 "trsvcid": "4420", 00:36:00.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:00.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:00.207 "hdgst": false, 00:36:00.207 "ddgst": false 00:36:00.207 }, 00:36:00.207 "method": "bdev_nvme_attach_controller" 00:36:00.207 },{ 00:36:00.207 "params": { 00:36:00.207 "name": "Nvme1", 00:36:00.207 "trtype": "tcp", 00:36:00.207 "traddr": "10.0.0.2", 00:36:00.207 "adrfam": "ipv4", 00:36:00.207 "trsvcid": "4420", 00:36:00.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:00.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:00.207 "hdgst": false, 00:36:00.207 "ddgst": false 00:36:00.207 }, 00:36:00.207 "method": "bdev_nvme_attach_controller" 00:36:00.207 }' 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:00.207 09:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.207 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:00.207 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:00.207 fio-3.35 00:36:00.208 Starting 2 threads 00:36:10.219 00:36:10.219 filename0: (groupid=0, jobs=1): err= 0: pid=998584: Wed Nov 20 09:21:35 2024 00:36:10.219 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:36:10.219 slat (nsec): min=5384, max=49255, avg=6602.28, stdev=2147.71 00:36:10.219 clat (usec): min=40854, max=42502, avg=40992.25, stdev=119.21 00:36:10.219 lat (usec): min=40859, max=42538, avg=40998.85, stdev=119.81 00:36:10.219 clat percentiles (usec): 00:36:10.219 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:10.219 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:10.219 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:10.219 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:10.219 | 99.99th=[42730] 00:36:10.219 bw ( KiB/s): min= 384, max= 416, per=49.65%, avg=388.80, stdev=11.72, samples=20 00:36:10.219 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:10.219 lat (msec) : 50=100.00% 00:36:10.219 cpu : usr=95.67%, sys=4.12%, ctx=36, majf=0, minf=203 00:36:10.219 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.219 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.219 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.219 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:10.219 filename1: (groupid=0, jobs=1): err= 0: pid=998586: Wed Nov 20 09:21:35 2024 00:36:10.219 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10012msec) 00:36:10.219 slat (nsec): min=5391, max=36150, avg=6407.66, stdev=1555.52 00:36:10.219 clat (usec): min=840, max=42965, avg=40845.74, stdev=2569.56 00:36:10.219 lat (usec): min=845, max=42974, avg=40852.14, stdev=2569.65 00:36:10.219 clat percentiles (usec): 00:36:10.219 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:10.219 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:10.219 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:10.219 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:36:10.219 | 99.99th=[42730] 00:36:10.220 bw ( KiB/s): min= 384, max= 416, per=49.91%, avg=390.40, stdev=13.13, samples=20 00:36:10.220 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:36:10.220 lat (usec) : 1000=0.41% 00:36:10.220 lat (msec) : 50=99.59% 00:36:10.220 cpu : usr=95.50%, sys=4.29%, ctx=61, majf=0, minf=90 00:36:10.220 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.220 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.220 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:10.220 00:36:10.220 Run status group 0 (all jobs): 00:36:10.220 READ: bw=781KiB/s (800kB/s), 390KiB/s-392KiB/s (399kB/s-401kB/s), io=7824KiB (8012kB), run=10007-10012msec 00:36:10.479 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:10.479 09:21:35 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:10.479 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:10.479 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:10.479 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:10.479 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:10.479 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.479 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.480 00:36:10.480 real 0m11.555s 00:36:10.480 user 0m31.492s 00:36:10.480 sys 0m1.253s 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:10.480 09:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:10.480 ************************************ 00:36:10.480 END TEST fio_dif_1_multi_subsystems 00:36:10.480 ************************************ 00:36:10.480 09:21:35 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:10.480 09:21:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:10.480 09:21:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:10.480 09:21:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:10.480 ************************************ 00:36:10.480 START TEST fio_dif_rand_params 00:36:10.480 ************************************ 00:36:10.480 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:10.480 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:10.480 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:10.480 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:10.480 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:10.480 09:21:36 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:36:10.480 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:10.480 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.740 bdev_null0 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.740 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.740 09:21:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.741 [2024-11-20 09:21:36.049226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:10.741 { 00:36:10.741 "params": { 00:36:10.741 "name": "Nvme$subsystem", 00:36:10.741 "trtype": "$TEST_TRANSPORT", 00:36:10.741 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:36:10.741 "adrfam": "ipv4", 00:36:10.741 "trsvcid": "$NVMF_PORT", 00:36:10.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:10.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:10.741 "hdgst": ${hdgst:-false}, 00:36:10.741 "ddgst": ${ddgst:-false} 00:36:10.741 }, 00:36:10.741 "method": "bdev_nvme_attach_controller" 00:36:10.741 } 00:36:10.741 EOF 00:36:10.741 )") 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:10.741 09:21:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:10.741 "params": { 00:36:10.741 "name": "Nvme0", 00:36:10.741 "trtype": "tcp", 00:36:10.741 "traddr": "10.0.0.2", 00:36:10.741 "adrfam": "ipv4", 00:36:10.741 "trsvcid": "4420", 00:36:10.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:10.741 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:10.741 "hdgst": false, 00:36:10.741 "ddgst": false 00:36:10.741 }, 00:36:10.741 "method": "bdev_nvme_attach_controller" 00:36:10.741 }' 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:10.741 09:21:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.028 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:11.028 ... 00:36:11.028 fio-3.35 00:36:11.028 Starting 3 threads 00:36:17.614 00:36:17.614 filename0: (groupid=0, jobs=1): err= 0: pid=1001053: Wed Nov 20 09:21:42 2024 00:36:17.614 read: IOPS=352, BW=44.0MiB/s (46.2MB/s)(222MiB/5046msec) 00:36:17.614 slat (nsec): min=5436, max=35533, avg=8075.03, stdev=1896.74 00:36:17.614 clat (usec): min=4391, max=87013, avg=8479.73, stdev=6133.44 00:36:17.614 lat (usec): min=4399, max=87022, avg=8487.80, stdev=6133.41 00:36:17.614 clat percentiles (usec): 00:36:17.614 | 1.00th=[ 4752], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6521], 00:36:17.614 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 7898], 00:36:17.614 | 70.00th=[ 8291], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[10290], 00:36:17.614 | 99.00th=[46924], 99.50th=[49021], 99.90th=[86508], 99.95th=[86508], 00:36:17.614 | 99.99th=[86508] 00:36:17.614 bw ( KiB/s): min=32768, max=51456, per=43.20%, avg=45465.60, stdev=5537.07, samples=10 00:36:17.614 iops : min= 256, max= 402, avg=355.20, stdev=43.26, samples=10 00:36:17.614 lat (msec) : 10=91.73%, 20=6.47%, 50=1.63%, 100=0.17% 00:36:17.614 cpu : usr=94.17%, sys=5.59%, ctx=7, majf=0, minf=96 00:36:17.614 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.614 issued rwts: total=1778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.614 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:17.614 filename0: (groupid=0, jobs=1): err= 0: pid=1001054: Wed Nov 20 09:21:42 2024 00:36:17.614 read: IOPS=140, BW=17.6MiB/s (18.4MB/s)(88.6MiB/5043msec) 00:36:17.614 slat (nsec): min=5579, max=60352, avg=8611.92, stdev=2640.56 00:36:17.614 
clat (msec): min=4, max=130, avg=21.32, stdev=22.13 00:36:17.614 lat (msec): min=4, max=131, avg=21.33, stdev=22.13 00:36:17.614 clat percentiles (msec): 00:36:17.614 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:36:17.614 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:36:17.614 | 70.00th=[ 12], 80.00th=[ 49], 90.00th=[ 50], 95.00th=[ 51], 00:36:17.614 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 131], 99.95th=[ 131], 00:36:17.614 | 99.99th=[ 131] 00:36:17.614 bw ( KiB/s): min= 8704, max=30464, per=17.17%, avg=18073.60, stdev=5898.75, samples=10 00:36:17.614 iops : min= 68, max= 238, avg=141.20, stdev=46.08, samples=10 00:36:17.614 lat (msec) : 10=64.46%, 20=7.05%, 50=20.31%, 100=8.04%, 250=0.14% 00:36:17.614 cpu : usr=96.09%, sys=3.69%, ctx=9, majf=0, minf=143 00:36:17.614 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.614 issued rwts: total=709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.614 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:17.614 filename0: (groupid=0, jobs=1): err= 0: pid=1001055: Wed Nov 20 09:21:42 2024 00:36:17.614 read: IOPS=331, BW=41.4MiB/s (43.4MB/s)(208MiB/5016msec) 00:36:17.614 slat (nsec): min=5428, max=46009, avg=8040.98, stdev=1937.21 00:36:17.614 clat (usec): min=4695, max=91451, avg=9043.11, stdev=8094.64 00:36:17.614 lat (usec): min=4703, max=91459, avg=9051.15, stdev=8094.85 00:36:17.614 clat percentiles (usec): 00:36:17.614 | 1.00th=[ 5014], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 6456], 00:36:17.614 | 30.00th=[ 6915], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7898], 00:36:17.614 | 70.00th=[ 8291], 80.00th=[ 8848], 90.00th=[ 9634], 95.00th=[10552], 00:36:17.614 | 99.00th=[48497], 99.50th=[49021], 99.90th=[89654], 99.95th=[91751], 00:36:17.614 | 99.99th=[91751] 00:36:17.614 bw ( 
KiB/s): min=23040, max=50432, per=40.35%, avg=42470.40, stdev=9542.43, samples=10 00:36:17.614 iops : min= 180, max= 394, avg=331.80, stdev=74.55, samples=10 00:36:17.614 lat (msec) : 10=91.52%, 20=5.11%, 50=3.07%, 100=0.30% 00:36:17.614 cpu : usr=94.20%, sys=5.56%, ctx=7, majf=0, minf=64 00:36:17.614 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.614 issued rwts: total=1662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.614 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:17.614 00:36:17.614 Run status group 0 (all jobs): 00:36:17.614 READ: bw=103MiB/s (108MB/s), 17.6MiB/s-44.0MiB/s (18.4MB/s-46.2MB/s), io=519MiB (544MB), run=5016-5046msec 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.614 
09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.614 bdev_null0 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.614 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.615 [2024-11-20 09:21:42.361518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.615 bdev_null1 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.615 bdev_null2 00:36:17.615 09:21:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:17.615 { 00:36:17.615 "params": { 00:36:17.615 "name": "Nvme$subsystem", 00:36:17.615 "trtype": "$TEST_TRANSPORT", 00:36:17.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.615 "adrfam": "ipv4", 00:36:17.615 "trsvcid": "$NVMF_PORT", 00:36:17.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.615 "hdgst": ${hdgst:-false}, 00:36:17.615 "ddgst": ${ddgst:-false} 00:36:17.615 }, 00:36:17.615 "method": "bdev_nvme_attach_controller" 00:36:17.615 } 00:36:17.615 EOF 00:36:17.615 )") 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:17.615 
09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:17.615 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:17.615 { 00:36:17.615 "params": { 00:36:17.615 "name": "Nvme$subsystem", 00:36:17.615 "trtype": "$TEST_TRANSPORT", 00:36:17.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.615 "adrfam": "ipv4", 00:36:17.615 "trsvcid": "$NVMF_PORT", 00:36:17.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.615 "hdgst": ${hdgst:-false}, 00:36:17.616 "ddgst": ${ddgst:-false} 00:36:17.616 }, 00:36:17.616 "method": "bdev_nvme_attach_controller" 00:36:17.616 } 00:36:17.616 EOF 00:36:17.616 )") 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:17.616 09:21:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:17.616 { 00:36:17.616 "params": { 00:36:17.616 "name": "Nvme$subsystem", 00:36:17.616 "trtype": "$TEST_TRANSPORT", 00:36:17.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.616 "adrfam": "ipv4", 00:36:17.616 "trsvcid": "$NVMF_PORT", 00:36:17.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.616 "hdgst": ${hdgst:-false}, 00:36:17.616 "ddgst": ${ddgst:-false} 00:36:17.616 }, 00:36:17.616 "method": "bdev_nvme_attach_controller" 00:36:17.616 } 00:36:17.616 EOF 00:36:17.616 )") 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:17.616 "params": { 00:36:17.616 "name": "Nvme0", 00:36:17.616 "trtype": "tcp", 00:36:17.616 "traddr": "10.0.0.2", 00:36:17.616 "adrfam": "ipv4", 00:36:17.616 "trsvcid": "4420", 00:36:17.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:17.616 "hdgst": false, 00:36:17.616 "ddgst": false 00:36:17.616 }, 00:36:17.616 "method": "bdev_nvme_attach_controller" 00:36:17.616 },{ 00:36:17.616 "params": { 00:36:17.616 "name": "Nvme1", 00:36:17.616 "trtype": "tcp", 00:36:17.616 "traddr": "10.0.0.2", 00:36:17.616 "adrfam": "ipv4", 00:36:17.616 "trsvcid": "4420", 00:36:17.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:17.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:17.616 "hdgst": false, 00:36:17.616 "ddgst": false 00:36:17.616 }, 00:36:17.616 "method": "bdev_nvme_attach_controller" 00:36:17.616 },{ 00:36:17.616 "params": { 00:36:17.616 "name": "Nvme2", 00:36:17.616 "trtype": "tcp", 00:36:17.616 "traddr": "10.0.0.2", 00:36:17.616 "adrfam": "ipv4", 00:36:17.616 "trsvcid": "4420", 00:36:17.616 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:17.616 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:17.616 "hdgst": false, 00:36:17.616 "ddgst": false 00:36:17.616 }, 00:36:17.616 "method": "bdev_nvme_attach_controller" 00:36:17.616 }' 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.616 09:21:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:17.616 09:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.616 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:17.616 ... 00:36:17.616 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:17.616 ... 00:36:17.616 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:17.616 ... 
00:36:17.616 fio-3.35 00:36:17.616 Starting 24 threads 00:36:29.855 00:36:29.855 filename0: (groupid=0, jobs=1): err= 0: pid=1002367: Wed Nov 20 09:21:54 2024 00:36:29.855 read: IOPS=664, BW=2659KiB/s (2723kB/s)(26.0MiB/10013msec) 00:36:29.855 slat (usec): min=5, max=107, avg=18.15, stdev=14.60 00:36:29.855 clat (usec): min=13834, max=31033, avg=23901.40, stdev=1255.16 00:36:29.855 lat (usec): min=13843, max=31044, avg=23919.55, stdev=1252.85 00:36:29.855 clat percentiles (usec): 00:36:29.855 | 1.00th=[20317], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:36:29.855 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:36:29.855 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25297], 95.00th=[25560], 00:36:29.855 | 99.00th=[26346], 99.50th=[27132], 99.90th=[27657], 99.95th=[27657], 00:36:29.855 | 99.99th=[31065] 00:36:29.855 bw ( KiB/s): min= 2560, max= 2688, per=3.99%, avg=2654.32, stdev=57.91, samples=19 00:36:29.855 iops : min= 640, max= 672, avg=663.58, stdev=14.48, samples=19 00:36:29.855 lat (msec) : 20=0.99%, 50=99.01% 00:36:29.855 cpu : usr=99.03%, sys=0.64%, ctx=15, majf=0, minf=32 00:36:29.855 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:29.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.855 filename0: (groupid=0, jobs=1): err= 0: pid=1002368: Wed Nov 20 09:21:54 2024 00:36:29.855 read: IOPS=677, BW=2711KiB/s (2777kB/s)(26.5MiB/10005msec) 00:36:29.855 slat (usec): min=4, max=107, avg=16.70, stdev=14.65 00:36:29.855 clat (usec): min=5056, max=40867, avg=23467.24, stdev=3326.42 00:36:29.855 lat (usec): min=5062, max=40881, avg=23483.94, stdev=3327.43 00:36:29.855 clat percentiles (usec): 00:36:29.855 | 1.00th=[14222], 5.00th=[16188], 
10.00th=[19530], 20.00th=[22676], 00:36:29.855 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:29.855 | 70.00th=[24511], 80.00th=[25035], 90.00th=[25822], 95.00th=[27132], 00:36:29.855 | 99.00th=[33817], 99.50th=[36439], 99.90th=[40633], 99.95th=[40633], 00:36:29.855 | 99.99th=[40633] 00:36:29.855 bw ( KiB/s): min= 2560, max= 3024, per=4.07%, avg=2702.58, stdev=124.75, samples=19 00:36:29.855 iops : min= 640, max= 756, avg=675.63, stdev=31.20, samples=19 00:36:29.855 lat (msec) : 10=0.32%, 20=10.60%, 50=89.07% 00:36:29.855 cpu : usr=98.97%, sys=0.70%, ctx=35, majf=0, minf=25 00:36:29.855 IO depths : 1=4.1%, 2=8.2%, 4=17.8%, 8=60.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:36:29.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 complete : 0=0.0%, 4=92.1%, 8=2.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 issued rwts: total=6782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.855 filename0: (groupid=0, jobs=1): err= 0: pid=1002369: Wed Nov 20 09:21:54 2024 00:36:29.855 read: IOPS=683, BW=2733KiB/s (2798kB/s)(26.7MiB/10012msec) 00:36:29.855 slat (nsec): min=5411, max=72051, avg=13456.74, stdev=10042.61 00:36:29.855 clat (usec): min=11387, max=41873, avg=23316.79, stdev=2834.16 00:36:29.855 lat (usec): min=11395, max=41895, avg=23330.25, stdev=2835.53 00:36:29.855 clat percentiles (usec): 00:36:29.855 | 1.00th=[13960], 5.00th=[16712], 10.00th=[19268], 20.00th=[22938], 00:36:29.855 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:29.855 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25297], 95.00th=[25822], 00:36:29.855 | 99.00th=[31065], 99.50th=[34341], 99.90th=[35914], 99.95th=[41681], 00:36:29.855 | 99.99th=[41681] 00:36:29.855 bw ( KiB/s): min= 2560, max= 3200, per=4.11%, avg=2731.79, stdev=154.56, samples=19 00:36:29.855 iops : min= 640, max= 800, avg=682.95, stdev=38.64, samples=19 00:36:29.855 lat (msec) 
: 20=11.35%, 50=88.65% 00:36:29.855 cpu : usr=98.83%, sys=0.85%, ctx=17, majf=0, minf=25 00:36:29.855 IO depths : 1=4.4%, 2=8.8%, 4=19.3%, 8=59.2%, 16=8.3%, 32=0.0%, >=64=0.0% 00:36:29.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 issued rwts: total=6840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.855 filename0: (groupid=0, jobs=1): err= 0: pid=1002370: Wed Nov 20 09:21:54 2024 00:36:29.855 read: IOPS=680, BW=2721KiB/s (2786kB/s)(26.6MiB/10011msec) 00:36:29.855 slat (nsec): min=5416, max=91410, avg=18084.25, stdev=12586.99 00:36:29.855 clat (usec): min=7596, max=40194, avg=23371.34, stdev=2855.81 00:36:29.855 lat (usec): min=7606, max=40204, avg=23389.43, stdev=2857.70 00:36:29.855 clat percentiles (usec): 00:36:29.855 | 1.00th=[13566], 5.00th=[17171], 10.00th=[20841], 20.00th=[22938], 00:36:29.855 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:29.855 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25297], 95.00th=[25822], 00:36:29.855 | 99.00th=[32637], 99.50th=[35390], 99.90th=[38536], 99.95th=[38536], 00:36:29.855 | 99.99th=[40109] 00:36:29.855 bw ( KiB/s): min= 2560, max= 3040, per=4.09%, avg=2719.16, stdev=137.27, samples=19 00:36:29.855 iops : min= 640, max= 760, avg=679.79, stdev=34.32, samples=19 00:36:29.855 lat (msec) : 10=0.47%, 20=8.74%, 50=90.79% 00:36:29.855 cpu : usr=98.89%, sys=0.78%, ctx=16, majf=0, minf=37 00:36:29.855 IO depths : 1=5.2%, 2=10.5%, 4=22.1%, 8=54.8%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:29.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 issued rwts: total=6810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.855 
filename0: (groupid=0, jobs=1): err= 0: pid=1002371: Wed Nov 20 09:21:54 2024 00:36:29.855 read: IOPS=726, BW=2907KiB/s (2977kB/s)(28.4MiB/10006msec) 00:36:29.855 slat (usec): min=5, max=174, avg=19.56, stdev=19.01 00:36:29.855 clat (usec): min=4849, max=40904, avg=21848.52, stdev=4523.53 00:36:29.855 lat (usec): min=4863, max=40932, avg=21868.08, stdev=4527.51 00:36:29.855 clat percentiles (usec): 00:36:29.855 | 1.00th=[ 7898], 5.00th=[14746], 10.00th=[15795], 20.00th=[16909], 00:36:29.855 | 30.00th=[20841], 40.00th=[22938], 50.00th=[23200], 60.00th=[23725], 00:36:29.855 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25035], 95.00th=[27132], 00:36:29.855 | 99.00th=[34866], 99.50th=[38536], 99.90th=[40633], 99.95th=[40633], 00:36:29.855 | 99.99th=[41157] 00:36:29.855 bw ( KiB/s): min= 2560, max= 3376, per=4.40%, avg=2920.42, stdev=228.36, samples=19 00:36:29.855 iops : min= 640, max= 844, avg=730.11, stdev=57.09, samples=19 00:36:29.855 lat (msec) : 10=1.24%, 20=27.37%, 50=71.40% 00:36:29.855 cpu : usr=99.02%, sys=0.68%, ctx=57, majf=0, minf=29 00:36:29.855 IO depths : 1=2.8%, 2=6.2%, 4=16.7%, 8=64.1%, 16=10.1%, 32=0.0%, >=64=0.0% 00:36:29.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 issued rwts: total=7272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.855 filename0: (groupid=0, jobs=1): err= 0: pid=1002372: Wed Nov 20 09:21:54 2024 00:36:29.855 read: IOPS=780, BW=3123KiB/s (3197kB/s)(30.5MiB/10015msec) 00:36:29.855 slat (nsec): min=5406, max=88821, avg=9217.73, stdev=7624.29 00:36:29.855 clat (usec): min=1420, max=40204, avg=20434.10, stdev=5073.43 00:36:29.855 lat (usec): min=1435, max=40212, avg=20443.32, stdev=5074.42 00:36:29.855 clat percentiles (usec): 00:36:29.855 | 1.00th=[ 3851], 5.00th=[12649], 10.00th=[14746], 20.00th=[16057], 00:36:29.855 | 30.00th=[17171], 
40.00th=[19268], 50.00th=[21890], 60.00th=[23200], 00:36:29.855 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25035], 95.00th=[26608], 00:36:29.855 | 99.00th=[31851], 99.50th=[35390], 99.90th=[38536], 99.95th=[40109], 00:36:29.855 | 99.99th=[40109] 00:36:29.855 bw ( KiB/s): min= 2816, max= 4016, per=4.71%, avg=3131.79, stdev=269.42, samples=19 00:36:29.855 iops : min= 704, max= 1004, avg=782.95, stdev=67.35, samples=19 00:36:29.855 lat (msec) : 2=0.28%, 4=0.77%, 10=1.82%, 20=40.28%, 50=56.86% 00:36:29.855 cpu : usr=98.72%, sys=0.96%, ctx=15, majf=0, minf=58 00:36:29.855 IO depths : 1=1.3%, 2=2.6%, 4=9.7%, 8=74.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:36:29.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 complete : 0=0.0%, 4=90.0%, 8=5.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 issued rwts: total=7818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.855 filename0: (groupid=0, jobs=1): err= 0: pid=1002373: Wed Nov 20 09:21:54 2024 00:36:29.855 read: IOPS=688, BW=2755KiB/s (2822kB/s)(26.9MiB/10008msec) 00:36:29.855 slat (nsec): min=4911, max=99370, avg=16490.32, stdev=12899.16 00:36:29.855 clat (usec): min=10200, max=44918, avg=23104.15, stdev=3922.06 00:36:29.855 lat (usec): min=10220, max=44931, avg=23120.64, stdev=3924.59 00:36:29.855 clat percentiles (usec): 00:36:29.855 | 1.00th=[13435], 5.00th=[15401], 10.00th=[17171], 20.00th=[20579], 00:36:29.855 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:29.855 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26084], 95.00th=[29492], 00:36:29.855 | 99.00th=[33817], 99.50th=[34866], 99.90th=[44827], 99.95th=[44827], 00:36:29.855 | 99.99th=[44827] 00:36:29.855 bw ( KiB/s): min= 2560, max= 3104, per=4.13%, avg=2746.11, stdev=158.23, samples=19 00:36:29.855 iops : min= 640, max= 776, avg=686.53, stdev=39.56, samples=19 00:36:29.855 lat (msec) : 20=17.70%, 50=82.30% 00:36:29.855 cpu : 
usr=99.03%, sys=0.64%, ctx=15, majf=0, minf=49 00:36:29.855 IO depths : 1=2.7%, 2=5.4%, 4=13.1%, 8=67.7%, 16=11.0%, 32=0.0%, >=64=0.0% 00:36:29.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 complete : 0=0.0%, 4=91.0%, 8=4.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.855 issued rwts: total=6894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.855 filename0: (groupid=0, jobs=1): err= 0: pid=1002374: Wed Nov 20 09:21:54 2024 00:36:29.855 read: IOPS=705, BW=2821KiB/s (2889kB/s)(27.6MiB/10010msec) 00:36:29.855 slat (usec): min=5, max=124, avg=19.28, stdev=16.68 00:36:29.855 clat (usec): min=9952, max=44436, avg=22543.28, stdev=4185.25 00:36:29.856 lat (usec): min=9961, max=44445, avg=22562.56, stdev=4188.25 00:36:29.856 clat percentiles (usec): 00:36:29.856 | 1.00th=[13042], 5.00th=[15139], 10.00th=[15926], 20.00th=[19006], 00:36:29.856 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23462], 60.00th=[23725], 00:36:29.856 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[28705], 00:36:29.856 | 99.00th=[35390], 99.50th=[36963], 99.90th=[40109], 99.95th=[42206], 00:36:29.856 | 99.99th=[44303] 00:36:29.856 bw ( KiB/s): min= 2288, max= 3184, per=4.24%, avg=2819.37, stdev=251.62, samples=19 00:36:29.856 iops : min= 572, max= 796, avg=704.84, stdev=62.90, samples=19 00:36:29.856 lat (msec) : 10=0.08%, 20=22.62%, 50=77.29% 00:36:29.856 cpu : usr=99.04%, sys=0.64%, ctx=16, majf=0, minf=36 00:36:29.856 IO depths : 1=2.3%, 2=4.6%, 4=14.2%, 8=68.3%, 16=10.7%, 32=0.0%, >=64=0.0% 00:36:29.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 complete : 0=0.0%, 4=90.7%, 8=4.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 issued rwts: total=7060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.856 filename1: (groupid=0, jobs=1): err= 0: pid=1002375: Wed 
Nov 20 09:21:54 2024 00:36:29.856 read: IOPS=687, BW=2751KiB/s (2817kB/s)(26.9MiB/10005msec) 00:36:29.856 slat (nsec): min=5408, max=97869, avg=18434.82, stdev=14222.13 00:36:29.856 clat (usec): min=11395, max=40261, avg=23113.22, stdev=3434.14 00:36:29.856 lat (usec): min=11401, max=40267, avg=23131.65, stdev=3436.51 00:36:29.856 clat percentiles (usec): 00:36:29.856 | 1.00th=[14746], 5.00th=[15926], 10.00th=[17171], 20.00th=[22152], 00:36:29.856 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:36:29.856 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[26608], 00:36:29.856 | 99.00th=[34341], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:36:29.856 | 99.99th=[40109] 00:36:29.856 bw ( KiB/s): min= 2560, max= 3104, per=4.12%, avg=2736.84, stdev=139.33, samples=19 00:36:29.856 iops : min= 640, max= 776, avg=684.21, stdev=34.83, samples=19 00:36:29.856 lat (msec) : 20=15.10%, 50=84.90% 00:36:29.856 cpu : usr=98.89%, sys=0.79%, ctx=20, majf=0, minf=34 00:36:29.856 IO depths : 1=4.2%, 2=8.5%, 4=18.7%, 8=59.9%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:29.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 complete : 0=0.0%, 4=92.4%, 8=2.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 issued rwts: total=6882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.856 filename1: (groupid=0, jobs=1): err= 0: pid=1002377: Wed Nov 20 09:21:54 2024 00:36:29.856 read: IOPS=683, BW=2735KiB/s (2800kB/s)(26.7MiB/10005msec) 00:36:29.856 slat (nsec): min=5407, max=93491, avg=14015.73, stdev=12040.38 00:36:29.856 clat (usec): min=6367, max=43551, avg=23326.63, stdev=4211.46 00:36:29.856 lat (usec): min=6374, max=43567, avg=23340.65, stdev=4212.27 00:36:29.856 clat percentiles (usec): 00:36:29.856 | 1.00th=[13829], 5.00th=[15664], 10.00th=[17695], 20.00th=[20841], 00:36:29.856 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 
00:36:29.856 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26870], 95.00th=[30802], 00:36:29.856 | 99.00th=[38011], 99.50th=[39584], 99.90th=[43254], 99.95th=[43254], 00:36:29.856 | 99.99th=[43779] 00:36:29.856 bw ( KiB/s): min= 2565, max= 2848, per=4.10%, avg=2727.00, stdev=70.62, samples=19 00:36:29.856 iops : min= 641, max= 712, avg=681.74, stdev=17.69, samples=19 00:36:29.856 lat (msec) : 10=0.28%, 20=16.68%, 50=83.04% 00:36:29.856 cpu : usr=99.00%, sys=0.68%, ctx=14, majf=0, minf=62 00:36:29.856 IO depths : 1=0.1%, 2=1.5%, 4=8.0%, 8=75.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:36:29.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 complete : 0=0.0%, 4=90.3%, 8=6.6%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 issued rwts: total=6840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.856 filename1: (groupid=0, jobs=1): err= 0: pid=1002378: Wed Nov 20 09:21:54 2024 00:36:29.856 read: IOPS=669, BW=2679KiB/s (2744kB/s)(26.2MiB/10008msec) 00:36:29.856 slat (nsec): min=5498, max=80688, avg=12975.56, stdev=10072.25 00:36:29.856 clat (usec): min=5102, max=27700, avg=23771.62, stdev=2141.22 00:36:29.856 lat (usec): min=5134, max=27708, avg=23784.60, stdev=2140.77 00:36:29.856 clat percentiles (usec): 00:36:29.856 | 1.00th=[12387], 5.00th=[22414], 10.00th=[22676], 20.00th=[23200], 00:36:29.856 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:36:29.856 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25297], 95.00th=[25560], 00:36:29.856 | 99.00th=[26346], 99.50th=[26870], 99.90th=[27657], 99.95th=[27657], 00:36:29.856 | 99.99th=[27657] 00:36:29.856 bw ( KiB/s): min= 2560, max= 3200, per=4.02%, avg=2674.53, stdev=140.83, samples=19 00:36:29.856 iops : min= 640, max= 800, avg=668.63, stdev=35.21, samples=19 00:36:29.856 lat (msec) : 10=0.92%, 20=0.98%, 50=98.09% 00:36:29.856 cpu : usr=99.03%, sys=0.61%, ctx=60, majf=0, minf=33 00:36:29.856 IO depths 
: 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:29.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.856 filename1: (groupid=0, jobs=1): err= 0: pid=1002379: Wed Nov 20 09:21:54 2024 00:36:29.856 read: IOPS=698, BW=2796KiB/s (2863kB/s)(27.3MiB/10006msec) 00:36:29.856 slat (usec): min=4, max=101, avg=16.18, stdev=13.51 00:36:29.856 clat (usec): min=6515, max=38602, avg=22754.79, stdev=3622.12 00:36:29.856 lat (usec): min=6522, max=38625, avg=22770.96, stdev=3624.47 00:36:29.856 clat percentiles (usec): 00:36:29.856 | 1.00th=[13435], 5.00th=[15664], 10.00th=[16909], 20.00th=[20841], 00:36:29.856 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:36:29.856 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25297], 95.00th=[26346], 00:36:29.856 | 99.00th=[33817], 99.50th=[34866], 99.90th=[38536], 99.95th=[38536], 00:36:29.856 | 99.99th=[38536] 00:36:29.856 bw ( KiB/s): min= 2560, max= 3200, per=4.20%, avg=2789.89, stdev=180.01, samples=19 00:36:29.856 iops : min= 640, max= 800, avg=697.47, stdev=45.00, samples=19 00:36:29.856 lat (msec) : 10=0.23%, 20=17.76%, 50=82.01% 00:36:29.856 cpu : usr=98.90%, sys=0.78%, ctx=15, majf=0, minf=41 00:36:29.856 IO depths : 1=3.9%, 2=8.0%, 4=17.9%, 8=61.2%, 16=9.0%, 32=0.0%, >=64=0.0% 00:36:29.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 complete : 0=0.0%, 4=92.1%, 8=2.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 issued rwts: total=6994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.856 filename1: (groupid=0, jobs=1): err= 0: pid=1002380: Wed Nov 20 09:21:54 2024 00:36:29.856 read: IOPS=682, BW=2730KiB/s 
(2795kB/s)(26.7MiB/10005msec) 00:36:29.856 slat (usec): min=5, max=109, avg=16.39, stdev=15.13 00:36:29.856 clat (usec): min=3676, max=49313, avg=23337.63, stdev=4066.93 00:36:29.856 lat (usec): min=3681, max=49330, avg=23354.02, stdev=4068.15 00:36:29.856 clat percentiles (usec): 00:36:29.856 | 1.00th=[13566], 5.00th=[16319], 10.00th=[17957], 20.00th=[21103], 00:36:29.856 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:36:29.856 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26870], 95.00th=[29754], 00:36:29.856 | 99.00th=[35914], 99.50th=[38536], 99.90th=[49021], 99.95th=[49021], 00:36:29.856 | 99.99th=[49546] 00:36:29.856 bw ( KiB/s): min= 2501, max= 2976, per=4.09%, avg=2716.89, stdev=103.19, samples=19 00:36:29.856 iops : min= 625, max= 744, avg=679.21, stdev=25.83, samples=19 00:36:29.856 lat (msec) : 4=0.06%, 10=0.41%, 20=16.86%, 50=82.67% 00:36:29.856 cpu : usr=98.94%, sys=0.74%, ctx=13, majf=0, minf=33 00:36:29.856 IO depths : 1=1.4%, 2=2.9%, 4=8.2%, 8=74.2%, 16=13.3%, 32=0.0%, >=64=0.0% 00:36:29.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 complete : 0=0.0%, 4=90.0%, 8=6.6%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 issued rwts: total=6828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.856 filename1: (groupid=0, jobs=1): err= 0: pid=1002381: Wed Nov 20 09:21:54 2024 00:36:29.856 read: IOPS=717, BW=2868KiB/s (2937kB/s)(28.0MiB/10013msec) 00:36:29.856 slat (usec): min=5, max=131, avg=14.78, stdev=13.99 00:36:29.856 clat (usec): min=5095, max=45358, avg=22202.56, stdev=4650.44 00:36:29.856 lat (usec): min=5130, max=45368, avg=22217.34, stdev=4652.49 00:36:29.856 clat percentiles (usec): 00:36:29.856 | 1.00th=[ 8029], 5.00th=[14091], 10.00th=[15533], 20.00th=[17957], 00:36:29.856 | 30.00th=[21890], 40.00th=[22938], 50.00th=[23462], 60.00th=[23725], 00:36:29.856 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25297], 
95.00th=[26608], 00:36:29.856 | 99.00th=[37487], 99.50th=[39584], 99.90th=[41157], 99.95th=[45351], 00:36:29.856 | 99.99th=[45351] 00:36:29.856 bw ( KiB/s): min= 2560, max= 3680, per=4.33%, avg=2874.95, stdev=274.45, samples=19 00:36:29.856 iops : min= 640, max= 920, avg=718.74, stdev=68.61, samples=19 00:36:29.856 lat (msec) : 10=1.41%, 20=24.21%, 50=74.39% 00:36:29.856 cpu : usr=99.06%, sys=0.60%, ctx=57, majf=0, minf=60 00:36:29.856 IO depths : 1=3.2%, 2=6.6%, 4=16.2%, 8=64.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:36:29.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 complete : 0=0.0%, 4=91.7%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.856 issued rwts: total=7180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.856 filename1: (groupid=0, jobs=1): err= 0: pid=1002382: Wed Nov 20 09:21:54 2024 00:36:29.856 read: IOPS=694, BW=2777KiB/s (2843kB/s)(27.1MiB/10006msec) 00:36:29.856 slat (nsec): min=5416, max=95130, avg=17952.34, stdev=12933.19 00:36:29.856 clat (usec): min=5109, max=42120, avg=22907.05, stdev=3862.61 00:36:29.856 lat (usec): min=5115, max=42138, avg=22925.00, stdev=3865.00 00:36:29.856 clat percentiles (usec): 00:36:29.856 | 1.00th=[13304], 5.00th=[15533], 10.00th=[16909], 20.00th=[21103], 00:36:29.856 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:36:29.856 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[27657], 00:36:29.856 | 99.00th=[35914], 99.50th=[37487], 99.90th=[38536], 99.95th=[42206], 00:36:29.856 | 99.99th=[42206] 00:36:29.856 bw ( KiB/s): min= 2560, max= 3104, per=4.17%, avg=2769.95, stdev=156.87, samples=19 00:36:29.856 iops : min= 640, max= 776, avg=692.47, stdev=39.23, samples=19 00:36:29.856 lat (msec) : 10=0.59%, 20=16.90%, 50=82.51% 00:36:29.856 cpu : usr=98.94%, sys=0.74%, ctx=13, majf=0, minf=27 00:36:29.856 IO depths : 1=3.3%, 2=6.9%, 4=16.3%, 8=63.5%, 16=9.9%, 32=0.0%, >=64=0.0% 
00:36:29.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 complete : 0=0.0%, 4=91.8%, 8=3.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 issued rwts: total=6946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.857 filename1: (groupid=0, jobs=1): err= 0: pid=1002383: Wed Nov 20 09:21:54 2024 00:36:29.857 read: IOPS=675, BW=2704KiB/s (2769kB/s)(26.4MiB/10004msec) 00:36:29.857 slat (nsec): min=5422, max=97198, avg=16903.78, stdev=12716.77 00:36:29.857 clat (usec): min=9284, max=45023, avg=23541.36, stdev=3332.28 00:36:29.857 lat (usec): min=9295, max=45042, avg=23558.27, stdev=3332.94 00:36:29.857 clat percentiles (usec): 00:36:29.857 | 1.00th=[13566], 5.00th=[16188], 10.00th=[20055], 20.00th=[22938], 00:36:29.857 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:29.857 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25560], 95.00th=[26870], 00:36:29.857 | 99.00th=[33817], 99.50th=[36963], 99.90th=[44827], 99.95th=[44827], 00:36:29.857 | 99.99th=[44827] 00:36:29.857 bw ( KiB/s): min= 2528, max= 2976, per=4.06%, avg=2698.95, stdev=126.55, samples=19 00:36:29.857 iops : min= 632, max= 744, avg=674.74, stdev=31.64, samples=19 00:36:29.857 lat (msec) : 10=0.12%, 20=9.83%, 50=90.05% 00:36:29.857 cpu : usr=98.97%, sys=0.71%, ctx=16, majf=0, minf=30 00:36:29.857 IO depths : 1=4.5%, 2=9.0%, 4=19.4%, 8=58.8%, 16=8.3%, 32=0.0%, >=64=0.0% 00:36:29.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 issued rwts: total=6762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.857 filename2: (groupid=0, jobs=1): err= 0: pid=1002384: Wed Nov 20 09:21:54 2024 00:36:29.857 read: IOPS=730, BW=2923KiB/s (2993kB/s)(28.6MiB/10010msec) 00:36:29.857 slat (usec): min=5, 
max=112, avg=17.29, stdev=14.59 00:36:29.857 clat (usec): min=4051, max=42653, avg=21762.40, stdev=4935.95 00:36:29.857 lat (usec): min=4069, max=42700, avg=21779.68, stdev=4938.57 00:36:29.857 clat percentiles (usec): 00:36:29.857 | 1.00th=[ 7832], 5.00th=[14353], 10.00th=[15401], 20.00th=[16909], 00:36:29.857 | 30.00th=[19792], 40.00th=[22414], 50.00th=[23200], 60.00th=[23462], 00:36:29.857 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25560], 95.00th=[27919], 00:36:29.857 | 99.00th=[36963], 99.50th=[38536], 99.90th=[40633], 99.95th=[42730], 00:36:29.857 | 99.99th=[42730] 00:36:29.857 bw ( KiB/s): min= 2560, max= 3328, per=4.40%, avg=2921.26, stdev=202.69, samples=19 00:36:29.857 iops : min= 640, max= 832, avg=730.32, stdev=50.67, samples=19 00:36:29.857 lat (msec) : 10=1.37%, 20=29.22%, 50=69.41% 00:36:29.857 cpu : usr=98.92%, sys=0.75%, ctx=35, majf=0, minf=48 00:36:29.857 IO depths : 1=3.0%, 2=6.1%, 4=15.2%, 8=65.8%, 16=9.8%, 32=0.0%, >=64=0.0% 00:36:29.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 complete : 0=0.0%, 4=91.4%, 8=3.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 issued rwts: total=7314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.857 filename2: (groupid=0, jobs=1): err= 0: pid=1002385: Wed Nov 20 09:21:54 2024 00:36:29.857 read: IOPS=680, BW=2720KiB/s (2785kB/s)(26.6MiB/10014msec) 00:36:29.857 slat (usec): min=5, max=134, avg=18.84, stdev=16.12 00:36:29.857 clat (usec): min=7081, max=40943, avg=23370.59, stdev=3087.77 00:36:29.857 lat (usec): min=7090, max=40952, avg=23389.43, stdev=3088.60 00:36:29.857 clat percentiles (usec): 00:36:29.857 | 1.00th=[12649], 5.00th=[16712], 10.00th=[20317], 20.00th=[22938], 00:36:29.857 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:29.857 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25297], 95.00th=[25560], 00:36:29.857 | 99.00th=[35390], 99.50th=[37487], 
99.90th=[39584], 99.95th=[41157], 00:36:29.857 | 99.99th=[41157] 00:36:29.857 bw ( KiB/s): min= 2560, max= 3040, per=4.09%, avg=2719.16, stdev=120.62, samples=19 00:36:29.857 iops : min= 640, max= 760, avg=679.79, stdev=30.15, samples=19 00:36:29.857 lat (msec) : 10=0.47%, 20=9.19%, 50=90.34% 00:36:29.857 cpu : usr=99.14%, sys=0.55%, ctx=36, majf=0, minf=44 00:36:29.857 IO depths : 1=5.4%, 2=10.7%, 4=22.3%, 8=54.3%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:29.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 issued rwts: total=6810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.857 filename2: (groupid=0, jobs=1): err= 0: pid=1002386: Wed Nov 20 09:21:54 2024 00:36:29.857 read: IOPS=682, BW=2730KiB/s (2795kB/s)(26.7MiB/10006msec) 00:36:29.857 slat (usec): min=5, max=115, avg=17.65, stdev=15.44 00:36:29.857 clat (usec): min=6245, max=46611, avg=23343.23, stdev=4476.94 00:36:29.857 lat (usec): min=6256, max=46620, avg=23360.88, stdev=4478.23 00:36:29.857 clat percentiles (usec): 00:36:29.857 | 1.00th=[12256], 5.00th=[15533], 10.00th=[17171], 20.00th=[20579], 00:36:29.857 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:36:29.857 | 70.00th=[24511], 80.00th=[25297], 90.00th=[27657], 95.00th=[31327], 00:36:29.857 | 99.00th=[38011], 99.50th=[40633], 99.90th=[43254], 99.95th=[46400], 00:36:29.857 | 99.99th=[46400] 00:36:29.857 bw ( KiB/s): min= 2544, max= 2896, per=4.10%, avg=2723.37, stdev=106.31, samples=19 00:36:29.857 iops : min= 636, max= 724, avg=680.84, stdev=26.58, samples=19 00:36:29.857 lat (msec) : 10=0.37%, 20=17.44%, 50=82.19% 00:36:29.857 cpu : usr=99.13%, sys=0.55%, ctx=27, majf=0, minf=43 00:36:29.857 IO depths : 1=0.7%, 2=1.6%, 4=6.8%, 8=76.6%, 16=14.2%, 32=0.0%, >=64=0.0% 00:36:29.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:36:29.857 complete : 0=0.0%, 4=89.7%, 8=7.0%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 issued rwts: total=6829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.857 filename2: (groupid=0, jobs=1): err= 0: pid=1002388: Wed Nov 20 09:21:54 2024 00:36:29.857 read: IOPS=707, BW=2831KiB/s (2899kB/s)(27.8MiB/10047msec) 00:36:29.857 slat (nsec): min=5405, max=96574, avg=14704.40, stdev=13660.82 00:36:29.857 clat (usec): min=9372, max=52235, avg=22480.89, stdev=4670.65 00:36:29.857 lat (usec): min=9381, max=52242, avg=22495.59, stdev=4672.79 00:36:29.857 clat percentiles (usec): 00:36:29.857 | 1.00th=[13566], 5.00th=[15401], 10.00th=[16319], 20.00th=[17957], 00:36:29.857 | 30.00th=[20579], 40.00th=[22676], 50.00th=[23200], 60.00th=[23462], 00:36:29.857 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26346], 95.00th=[31327], 00:36:29.857 | 99.00th=[35914], 99.50th=[38011], 99.90th=[52167], 99.95th=[52167], 00:36:29.857 | 99.99th=[52167] 00:36:29.857 bw ( KiB/s): min= 2560, max= 3248, per=4.29%, avg=2849.68, stdev=163.68, samples=19 00:36:29.857 iops : min= 640, max= 812, avg=712.42, stdev=40.92, samples=19 00:36:29.857 lat (msec) : 10=0.06%, 20=27.89%, 50=71.88%, 100=0.17% 00:36:29.857 cpu : usr=98.91%, sys=0.77%, ctx=17, majf=0, minf=33 00:36:29.857 IO depths : 1=2.2%, 2=4.4%, 4=12.3%, 8=69.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:36:29.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 complete : 0=0.0%, 4=90.6%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 issued rwts: total=7110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.857 filename2: (groupid=0, jobs=1): err= 0: pid=1002389: Wed Nov 20 09:21:54 2024 00:36:29.857 read: IOPS=691, BW=2764KiB/s (2831kB/s)(27.0MiB/10005msec) 00:36:29.857 slat (nsec): min=4893, max=96237, avg=16023.40, stdev=13735.32 00:36:29.857 clat 
(usec): min=10430, max=44868, avg=23034.25, stdev=4080.26 00:36:29.857 lat (usec): min=10436, max=44881, avg=23050.27, stdev=4082.40 00:36:29.857 clat percentiles (usec): 00:36:29.857 | 1.00th=[13304], 5.00th=[15270], 10.00th=[16581], 20.00th=[20317], 00:36:29.857 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:36:29.857 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26346], 95.00th=[29230], 00:36:29.857 | 99.00th=[34866], 99.50th=[37487], 99.90th=[41681], 99.95th=[44827], 00:36:29.857 | 99.99th=[44827] 00:36:29.857 bw ( KiB/s): min= 2432, max= 3008, per=4.15%, avg=2757.89, stdev=153.51, samples=19 00:36:29.857 iops : min= 608, max= 752, avg=689.47, stdev=38.38, samples=19 00:36:29.857 lat (msec) : 20=19.27%, 50=80.73% 00:36:29.857 cpu : usr=99.00%, sys=0.68%, ctx=21, majf=0, minf=49 00:36:29.857 IO depths : 1=2.2%, 2=4.5%, 4=12.0%, 8=69.9%, 16=11.5%, 32=0.0%, >=64=0.0% 00:36:29.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 complete : 0=0.0%, 4=90.6%, 8=5.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 issued rwts: total=6914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.857 filename2: (groupid=0, jobs=1): err= 0: pid=1002390: Wed Nov 20 09:21:54 2024 00:36:29.857 read: IOPS=677, BW=2711KiB/s (2776kB/s)(26.5MiB/10014msec) 00:36:29.857 slat (nsec): min=5480, max=95747, avg=22246.73, stdev=15679.43 00:36:29.857 clat (usec): min=10540, max=41335, avg=23396.30, stdev=3002.90 00:36:29.857 lat (usec): min=10549, max=41343, avg=23418.54, stdev=3004.46 00:36:29.857 clat percentiles (usec): 00:36:29.857 | 1.00th=[14746], 5.00th=[16581], 10.00th=[20317], 20.00th=[22676], 00:36:29.857 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:29.857 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[26608], 00:36:29.857 | 99.00th=[33424], 99.50th=[35390], 99.90th=[38011], 99.95th=[39584], 00:36:29.857 | 
99.99th=[41157] 00:36:29.857 bw ( KiB/s): min= 2560, max= 3024, per=4.08%, avg=2710.16, stdev=120.87, samples=19 00:36:29.857 iops : min= 640, max= 756, avg=677.53, stdev=30.20, samples=19 00:36:29.857 lat (msec) : 20=9.65%, 50=90.35% 00:36:29.857 cpu : usr=98.94%, sys=0.62%, ctx=49, majf=0, minf=41 00:36:29.857 IO depths : 1=4.9%, 2=9.9%, 4=21.0%, 8=56.3%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:29.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 complete : 0=0.0%, 4=93.0%, 8=1.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.857 issued rwts: total=6788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.857 filename2: (groupid=0, jobs=1): err= 0: pid=1002391: Wed Nov 20 09:21:54 2024 00:36:29.857 read: IOPS=674, BW=2698KiB/s (2763kB/s)(26.4MiB/10006msec) 00:36:29.857 slat (nsec): min=5407, max=88912, avg=17060.55, stdev=12033.24 00:36:29.857 clat (usec): min=6512, max=47300, avg=23573.85, stdev=3434.58 00:36:29.857 lat (usec): min=6521, max=47317, avg=23590.91, stdev=3435.42 00:36:29.857 clat percentiles (usec): 00:36:29.857 | 1.00th=[12780], 5.00th=[16581], 10.00th=[20055], 20.00th=[22938], 00:36:29.857 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:29.857 | 70.00th=[24511], 80.00th=[25035], 90.00th=[25822], 95.00th=[27132], 00:36:29.858 | 99.00th=[36439], 99.50th=[37487], 99.90th=[40633], 99.95th=[47449], 00:36:29.858 | 99.99th=[47449] 00:36:29.858 bw ( KiB/s): min= 2436, max= 3008, per=4.04%, avg=2687.37, stdev=132.00, samples=19 00:36:29.858 iops : min= 609, max= 752, avg=671.84, stdev=33.00, samples=19 00:36:29.858 lat (msec) : 10=0.50%, 20=9.50%, 50=90.00% 00:36:29.858 cpu : usr=98.75%, sys=0.83%, ctx=65, majf=0, minf=30 00:36:29.858 IO depths : 1=4.0%, 2=8.3%, 4=18.4%, 8=60.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:29.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.858 complete : 0=0.0%, 4=92.3%, 
8=2.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.858 issued rwts: total=6750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.858 filename2: (groupid=0, jobs=1): err= 0: pid=1002392: Wed Nov 20 09:21:54 2024 00:36:29.858 read: IOPS=707, BW=2831KiB/s (2899kB/s)(27.7MiB/10014msec) 00:36:29.858 slat (usec): min=5, max=127, avg=15.80, stdev=14.45 00:36:29.858 clat (usec): min=8780, max=44601, avg=22488.25, stdev=4251.13 00:36:29.858 lat (usec): min=8788, max=44609, avg=22504.05, stdev=4253.17 00:36:29.858 clat percentiles (usec): 00:36:29.858 | 1.00th=[12911], 5.00th=[14746], 10.00th=[16057], 20.00th=[19006], 00:36:29.858 | 30.00th=[22152], 40.00th=[22938], 50.00th=[23462], 60.00th=[23725], 00:36:29.858 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[28181], 00:36:29.858 | 99.00th=[36439], 99.50th=[39060], 99.90th=[43779], 99.95th=[44827], 00:36:29.858 | 99.99th=[44827] 00:36:29.858 bw ( KiB/s): min= 2560, max= 3120, per=4.27%, avg=2836.21, stdev=197.32, samples=19 00:36:29.858 iops : min= 640, max= 780, avg=709.05, stdev=49.33, samples=19 00:36:29.858 lat (msec) : 10=0.13%, 20=23.98%, 50=75.89% 00:36:29.858 cpu : usr=99.08%, sys=0.61%, ctx=14, majf=0, minf=33 00:36:29.858 IO depths : 1=2.9%, 2=5.9%, 4=15.1%, 8=65.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:36:29.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.858 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.858 issued rwts: total=7088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:29.858 00:36:29.858 Run status group 0 (all jobs): 00:36:29.858 READ: bw=64.9MiB/s (68.0MB/s), 2659KiB/s-3123KiB/s (2723kB/s-3197kB/s), io=652MiB (684MB), run=10004-10047msec 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@43 -- # local sub 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:29.858 
09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@117 -- # create_subsystems 0 1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 bdev_null0 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:29.858 09:21:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 [2024-11-20 09:21:54.300825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 bdev_null1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 
09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:29.858 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:29.858 { 00:36:29.858 "params": { 00:36:29.859 "name": "Nvme$subsystem", 00:36:29.859 "trtype": "$TEST_TRANSPORT", 00:36:29.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:29.859 "adrfam": "ipv4", 00:36:29.859 "trsvcid": "$NVMF_PORT", 00:36:29.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:29.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:29.859 
"hdgst": ${hdgst:-false}, 00:36:29.859 "ddgst": ${ddgst:-false} 00:36:29.859 }, 00:36:29.859 "method": "bdev_nvme_attach_controller" 00:36:29.859 } 00:36:29.859 EOF 00:36:29.859 )") 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:29.859 { 00:36:29.859 "params": { 00:36:29.859 "name": "Nvme$subsystem", 00:36:29.859 "trtype": "$TEST_TRANSPORT", 00:36:29.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:29.859 "adrfam": "ipv4", 00:36:29.859 "trsvcid": "$NVMF_PORT", 00:36:29.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:29.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:29.859 "hdgst": ${hdgst:-false}, 00:36:29.859 "ddgst": ${ddgst:-false} 00:36:29.859 }, 00:36:29.859 "method": "bdev_nvme_attach_controller" 00:36:29.859 } 00:36:29.859 EOF 00:36:29.859 )") 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:29.859 "params": { 00:36:29.859 "name": "Nvme0", 00:36:29.859 "trtype": "tcp", 00:36:29.859 "traddr": "10.0.0.2", 00:36:29.859 "adrfam": "ipv4", 00:36:29.859 "trsvcid": "4420", 00:36:29.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:29.859 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:29.859 "hdgst": false, 00:36:29.859 "ddgst": false 00:36:29.859 }, 00:36:29.859 "method": "bdev_nvme_attach_controller" 00:36:29.859 },{ 00:36:29.859 "params": { 00:36:29.859 "name": "Nvme1", 00:36:29.859 "trtype": "tcp", 00:36:29.859 "traddr": "10.0.0.2", 00:36:29.859 "adrfam": "ipv4", 00:36:29.859 "trsvcid": "4420", 00:36:29.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:29.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:29.859 "hdgst": false, 00:36:29.859 "ddgst": false 00:36:29.859 }, 00:36:29.859 "method": "bdev_nvme_attach_controller" 00:36:29.859 }' 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:29.859 09:21:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:29.859 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.859 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:29.859 ... 00:36:29.859 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:29.859 ... 00:36:29.859 fio-3.35 00:36:29.859 Starting 4 threads 00:36:35.146 00:36:35.146 filename0: (groupid=0, jobs=1): err= 0: pid=1004770: Wed Nov 20 09:22:00 2024 00:36:35.146 read: IOPS=2834, BW=22.1MiB/s (23.2MB/s)(111MiB/5001msec) 00:36:35.146 slat (usec): min=5, max=149, avg= 6.80, stdev= 3.48 00:36:35.146 clat (usec): min=973, max=6711, avg=2804.02, stdev=503.47 00:36:35.146 lat (usec): min=979, max=6738, avg=2810.81, stdev=503.58 00:36:35.146 clat percentiles (usec): 00:36:35.146 | 1.00th=[ 1942], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:36:35.146 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:35.146 | 70.00th=[ 2704], 80.00th=[ 2933], 90.00th=[ 3720], 95.00th=[ 3916], 00:36:35.146 | 99.00th=[ 4293], 99.50th=[ 4686], 99.90th=[ 5538], 99.95th=[ 6652], 00:36:35.146 | 99.99th=[ 6718] 00:36:35.146 bw ( KiB/s): min=20424, max=23200, per=24.41%, avg=22676.44, stdev=872.30, samples=9 00:36:35.146 iops : min= 2553, max= 2900, avg=2834.56, stdev=109.04, samples=9 00:36:35.146 lat (usec) : 1000=0.04% 00:36:35.146 lat (msec) : 2=1.25%, 4=95.08%, 10=3.64% 00:36:35.146 cpu : usr=97.08%, sys=2.68%, ctx=7, majf=0, minf=36 00:36:35.146 IO depths : 1=0.1%, 2=0.5%, 4=71.8%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:35.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.146 complete : 0=0.0%, 
4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.146 issued rwts: total=14174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:35.146 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:35.146 filename0: (groupid=0, jobs=1): err= 0: pid=1004771: Wed Nov 20 09:22:00 2024 00:36:35.146 read: IOPS=2976, BW=23.3MiB/s (24.4MB/s)(116MiB/5002msec) 00:36:35.146 slat (nsec): min=5436, max=57404, avg=8353.72, stdev=3199.24 00:36:35.146 clat (usec): min=1245, max=5605, avg=2664.86, stdev=303.65 00:36:35.146 lat (usec): min=1263, max=5616, avg=2673.22, stdev=303.79 00:36:35.146 clat percentiles (usec): 00:36:35.146 | 1.00th=[ 1876], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2507], 00:36:35.146 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:35.146 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2933], 95.00th=[ 3228], 00:36:35.146 | 99.00th=[ 3785], 99.50th=[ 4015], 99.90th=[ 4555], 99.95th=[ 4752], 00:36:35.146 | 99.99th=[ 5604] 00:36:35.146 bw ( KiB/s): min=22048, max=24208, per=25.64%, avg=23819.20, stdev=649.22, samples=10 00:36:35.146 iops : min= 2756, max= 3026, avg=2977.40, stdev=81.15, samples=10 00:36:35.146 lat (msec) : 2=1.52%, 4=97.96%, 10=0.52% 00:36:35.146 cpu : usr=96.40%, sys=3.34%, ctx=8, majf=0, minf=48 00:36:35.146 IO depths : 1=0.1%, 2=0.4%, 4=70.8%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:35.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.146 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.146 issued rwts: total=14889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:35.146 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:35.146 filename1: (groupid=0, jobs=1): err= 0: pid=1004772: Wed Nov 20 09:22:00 2024 00:36:35.146 read: IOPS=2964, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:36:35.146 slat (nsec): min=5400, max=76943, avg=6564.62, stdev=2916.21 00:36:35.146 clat (usec): min=1126, max=6518, avg=2680.77, stdev=368.32 00:36:35.146 
lat (usec): min=1132, max=6524, avg=2687.33, stdev=368.63 00:36:35.146 clat percentiles (usec): 00:36:35.146 | 1.00th=[ 1876], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2507], 00:36:35.146 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:35.146 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2999], 95.00th=[ 3458], 00:36:35.146 | 99.00th=[ 4015], 99.50th=[ 4293], 99.90th=[ 4948], 99.95th=[ 5669], 00:36:35.146 | 99.99th=[ 6521] 00:36:35.146 bw ( KiB/s): min=21264, max=24224, per=25.45%, avg=23642.67, stdev=915.92, samples=9 00:36:35.146 iops : min= 2658, max= 3028, avg=2955.33, stdev=114.49, samples=9 00:36:35.146 lat (msec) : 2=1.84%, 4=97.14%, 10=1.02% 00:36:35.146 cpu : usr=96.76%, sys=2.98%, ctx=7, majf=0, minf=36 00:36:35.146 IO depths : 1=0.1%, 2=0.3%, 4=71.3%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:35.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.146 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.146 issued rwts: total=14826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:35.146 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:35.146 filename1: (groupid=0, jobs=1): err= 0: pid=1004773: Wed Nov 20 09:22:00 2024 00:36:35.146 read: IOPS=2908, BW=22.7MiB/s (23.8MB/s)(115MiB/5042msec) 00:36:35.146 slat (nsec): min=5399, max=70737, avg=6591.66, stdev=3029.22 00:36:35.146 clat (usec): min=1191, max=41629, avg=2718.86, stdev=669.57 00:36:35.146 lat (usec): min=1197, max=41635, avg=2725.45, stdev=669.68 00:36:35.146 clat percentiles (usec): 00:36:35.146 | 1.00th=[ 1942], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2507], 00:36:35.146 | 30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:36:35.146 | 70.00th=[ 2737], 80.00th=[ 2835], 90.00th=[ 3064], 95.00th=[ 3490], 00:36:35.146 | 99.00th=[ 4047], 99.50th=[ 4359], 99.90th=[ 5014], 99.95th=[ 5538], 00:36:35.146 | 99.99th=[41681] 00:36:35.146 bw ( KiB/s): min=21216, max=24224, per=25.25%, 
avg=23460.80, stdev=829.16, samples=10 00:36:35.146 iops : min= 2652, max= 3028, avg=2932.60, stdev=103.64, samples=10 00:36:35.146 lat (msec) : 2=1.43%, 4=97.36%, 10=1.19%, 50=0.02% 00:36:35.146 cpu : usr=96.87%, sys=2.88%, ctx=12, majf=0, minf=80 00:36:35.146 IO depths : 1=0.1%, 2=0.4%, 4=70.6%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:35.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.146 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.146 issued rwts: total=14666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:35.146 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:35.146 00:36:35.146 Run status group 0 (all jobs): 00:36:35.146 READ: bw=90.7MiB/s (95.1MB/s), 22.1MiB/s-23.3MiB/s (23.2MB/s-24.4MB/s), io=457MiB (480MB), run=5001-5042msec 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.408 00:36:35.408 real 0m24.800s 00:36:35.408 user 5m20.634s 00:36:35.408 sys 0m4.190s 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:35.408 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:35.408 ************************************ 00:36:35.408 END TEST fio_dif_rand_params 00:36:35.408 ************************************ 00:36:35.408 09:22:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:35.408 09:22:00 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:35.408 09:22:00 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:35.408 09:22:00 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:35.408 ************************************ 00:36:35.408 START TEST fio_dif_digest 00:36:35.408 ************************************ 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:35.408 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:35.409 bdev_null0 00:36:35.409 09:22:00 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.409 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:35.409 [2024-11-20 09:22:00.930764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 
00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:35.669 { 00:36:35.669 "params": { 00:36:35.669 "name": "Nvme$subsystem", 00:36:35.669 "trtype": "$TEST_TRANSPORT", 00:36:35.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:35.669 "adrfam": "ipv4", 00:36:35.669 "trsvcid": "$NVMF_PORT", 00:36:35.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:35.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:35.669 "hdgst": ${hdgst:-false}, 00:36:35.669 "ddgst": ${ddgst:-false} 00:36:35.669 }, 00:36:35.669 "method": "bdev_nvme_attach_controller" 00:36:35.669 } 00:36:35.669 EOF 00:36:35.669 )") 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:35.669 09:22:00 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:35.669 "params": { 00:36:35.669 "name": "Nvme0", 00:36:35.669 "trtype": "tcp", 00:36:35.669 "traddr": "10.0.0.2", 00:36:35.669 "adrfam": "ipv4", 00:36:35.669 "trsvcid": "4420", 00:36:35.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:35.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:35.669 "hdgst": true, 00:36:35.669 "ddgst": true 00:36:35.669 }, 00:36:35.669 "method": "bdev_nvme_attach_controller" 00:36:35.669 }' 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:35.669 09:22:00 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:35.669 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:35.669 09:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:35.669 09:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:35.669 09:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:35.669 09:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:35.930 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:35.930 ... 00:36:35.930 fio-3.35 00:36:35.930 Starting 3 threads 00:36:48.166 00:36:48.166 filename0: (groupid=0, jobs=1): err= 0: pid=1006127: Wed Nov 20 09:22:11 2024 00:36:48.166 read: IOPS=347, BW=43.5MiB/s (45.6MB/s)(437MiB/10046msec) 00:36:48.166 slat (nsec): min=5768, max=33038, avg=7566.74, stdev=1544.15 00:36:48.166 clat (usec): min=5994, max=50093, avg=8603.66, stdev=2657.34 00:36:48.166 lat (usec): min=6001, max=50100, avg=8611.23, stdev=2657.32 00:36:48.166 clat percentiles (usec): 00:36:48.166 | 1.00th=[ 7046], 5.00th=[ 7439], 10.00th=[ 7635], 20.00th=[ 7898], 00:36:48.166 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:36:48.166 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9503], 00:36:48.166 | 99.00th=[10028], 99.50th=[11338], 99.90th=[50070], 99.95th=[50070], 00:36:48.166 | 99.99th=[50070] 00:36:48.166 bw ( KiB/s): min=40960, max=46336, per=38.27%, avg=44697.60, stdev=1536.90, samples=20 00:36:48.166 iops : min= 320, max= 362, avg=349.20, stdev=12.01, samples=20 00:36:48.166 lat (msec) : 10=98.88%, 20=0.72%, 50=0.29%, 100=0.11% 00:36:48.166 cpu : usr=95.41%, sys=4.35%, 
ctx=29, majf=0, minf=82 00:36:48.166 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:48.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.166 issued rwts: total=3494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.166 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:48.166 filename0: (groupid=0, jobs=1): err= 0: pid=1006128: Wed Nov 20 09:22:11 2024 00:36:48.166 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(359MiB/10045msec) 00:36:48.166 slat (nsec): min=5793, max=33538, avg=9002.32, stdev=1209.07 00:36:48.166 clat (usec): min=6103, max=50075, avg=10477.91, stdev=1378.82 00:36:48.167 lat (usec): min=6111, max=50081, avg=10486.91, stdev=1378.79 00:36:48.167 clat percentiles (usec): 00:36:48.167 | 1.00th=[ 7177], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:36:48.167 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:36:48.167 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:36:48.167 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13829], 99.95th=[45876], 00:36:48.167 | 99.99th=[50070] 00:36:48.167 bw ( KiB/s): min=35584, max=38400, per=31.42%, avg=36697.60, stdev=660.68, samples=20 00:36:48.167 iops : min= 278, max= 300, avg=286.70, stdev= 5.16, samples=20 00:36:48.167 lat (msec) : 10=27.88%, 20=72.05%, 50=0.03%, 100=0.03% 00:36:48.167 cpu : usr=94.17%, sys=5.61%, ctx=12, majf=0, minf=135 00:36:48.167 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:48.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.167 issued rwts: total=2869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.167 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:48.167 filename0: (groupid=0, jobs=1): err= 0: pid=1006129: Wed Nov 20 
09:22:11 2024 00:36:48.167 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(350MiB/10044msec) 00:36:48.167 slat (nsec): min=5761, max=31053, avg=7647.67, stdev=1752.70 00:36:48.167 clat (usec): min=6698, max=49401, avg=10725.87, stdev=1365.74 00:36:48.167 lat (usec): min=6704, max=49407, avg=10733.51, stdev=1365.76 00:36:48.167 clat percentiles (usec): 00:36:48.167 | 1.00th=[ 7635], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 00:36:48.167 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:36:48.167 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:36:48.167 | 99.00th=[12911], 99.50th=[13304], 99.90th=[14746], 99.95th=[44827], 00:36:48.167 | 99.99th=[49546] 00:36:48.167 bw ( KiB/s): min=34560, max=37376, per=30.70%, avg=35852.80, stdev=702.19, samples=20 00:36:48.167 iops : min= 270, max= 292, avg=280.10, stdev= 5.49, samples=20 00:36:48.167 lat (msec) : 10=19.98%, 20=79.95%, 50=0.07% 00:36:48.167 cpu : usr=94.11%, sys=5.67%, ctx=21, majf=0, minf=174 00:36:48.167 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:48.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.167 issued rwts: total=2803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.167 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:48.167 00:36:48.167 Run status group 0 (all jobs): 00:36:48.167 READ: bw=114MiB/s (120MB/s), 34.9MiB/s-43.5MiB/s (36.6MB/s-45.6MB/s), io=1146MiB (1201MB), run=10044-10046msec 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- 
target/dif.sh@36 -- # local sub_id=0 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.167 00:36:48.167 real 0m11.271s 00:36:48.167 user 0m40.924s 00:36:48.167 sys 0m1.881s 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:48.167 09:22:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:48.167 ************************************ 00:36:48.167 END TEST fio_dif_digest 00:36:48.167 ************************************ 00:36:48.167 09:22:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:48.167 09:22:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:48.167 rmmod nvme_tcp 00:36:48.167 rmmod nvme_fabrics 00:36:48.167 rmmod nvme_keyring 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 995810 ']' 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 995810 00:36:48.167 09:22:12 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 995810 ']' 00:36:48.167 09:22:12 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 995810 00:36:48.167 09:22:12 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:36:48.167 09:22:12 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:48.167 09:22:12 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 995810 00:36:48.167 09:22:12 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:48.167 09:22:12 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:48.167 09:22:12 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 995810' 00:36:48.167 killing process with pid 995810 00:36:48.167 09:22:12 nvmf_dif -- common/autotest_common.sh@973 -- # kill 995810 00:36:48.167 09:22:12 nvmf_dif -- common/autotest_common.sh@978 -- # wait 995810 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:48.167 09:22:12 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:50.715 Waiting for block devices as requested 00:36:50.715 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:50.715 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:50.715 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:50.715 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:50.715 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:50.715 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:50.975 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:50.975 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:50.975 
0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:51.237 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:51.237 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:51.498 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:51.498 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:51.498 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:51.759 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:51.759 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:51.759 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:52.020 09:22:17 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:52.020 09:22:17 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:52.020 09:22:17 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:52.020 09:22:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:36:52.020 09:22:17 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:52.020 09:22:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:36:52.281 09:22:17 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:52.281 09:22:17 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:52.281 09:22:17 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.281 09:22:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:52.281 09:22:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.192 09:22:19 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:54.192 00:36:54.192 real 1m18.722s 00:36:54.192 user 7m52.201s 00:36:54.192 sys 0m21.679s 00:36:54.192 09:22:19 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:54.192 09:22:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:54.192 ************************************ 00:36:54.192 END TEST nvmf_dif 00:36:54.192 ************************************ 00:36:54.192 09:22:19 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:54.192 09:22:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:54.192 09:22:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:54.192 09:22:19 -- common/autotest_common.sh@10 -- # set +x 00:36:54.192 ************************************ 00:36:54.192 START TEST nvmf_abort_qd_sizes 00:36:54.192 ************************************ 00:36:54.192 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:54.453 * Looking for test storage... 00:36:54.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:54.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.453 --rc genhtml_branch_coverage=1 00:36:54.453 --rc genhtml_function_coverage=1 00:36:54.453 --rc 
genhtml_legend=1 00:36:54.453 --rc geninfo_all_blocks=1 00:36:54.453 --rc geninfo_unexecuted_blocks=1 00:36:54.453 00:36:54.453 ' 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:54.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.453 --rc genhtml_branch_coverage=1 00:36:54.453 --rc genhtml_function_coverage=1 00:36:54.453 --rc genhtml_legend=1 00:36:54.453 --rc geninfo_all_blocks=1 00:36:54.453 --rc geninfo_unexecuted_blocks=1 00:36:54.453 00:36:54.453 ' 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:54.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.453 --rc genhtml_branch_coverage=1 00:36:54.453 --rc genhtml_function_coverage=1 00:36:54.453 --rc genhtml_legend=1 00:36:54.453 --rc geninfo_all_blocks=1 00:36:54.453 --rc geninfo_unexecuted_blocks=1 00:36:54.453 00:36:54.453 ' 00:36:54.453 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:54.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.453 --rc genhtml_branch_coverage=1 00:36:54.453 --rc genhtml_function_coverage=1 00:36:54.453 --rc genhtml_legend=1 00:36:54.453 --rc geninfo_all_blocks=1 00:36:54.453 --rc geninfo_unexecuted_blocks=1 00:36:54.453 00:36:54.454 ' 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:54.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:36:54.454 09:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:02.594 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:02.595 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:02.595 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:02.595 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:02.595 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:02.595 09:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:02.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:02.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:37:02.595 00:37:02.595 --- 10.0.0.2 ping statistics --- 00:37:02.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.595 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:02.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:02.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:37:02.595 00:37:02.595 --- 10.0.0.1 ping statistics --- 00:37:02.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.595 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:02.595 09:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:05.164 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:05.164 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:05.164 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:05.164 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:05.164 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:05.164 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:05.164 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:05.164 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:05.164 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:05.426 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:05.426 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:05.426 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:05.426 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:05.426 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:05.426 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:37:05.426 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:05.426 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:05.687 09:22:31 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:05.687 09:22:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:05.687 09:22:31 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:05.687 09:22:31 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:05.687 09:22:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:05.687 09:22:31 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1015612 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1015612 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1015612 ']' 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:05.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:05.946 09:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:05.946 [2024-11-20 09:22:31.301156] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:37:05.946 [2024-11-20 09:22:31.301238] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:05.946 [2024-11-20 09:22:31.400765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:05.946 [2024-11-20 09:22:31.455505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:05.946 [2024-11-20 09:22:31.455560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:05.946 [2024-11-20 09:22:31.455570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:05.946 [2024-11-20 09:22:31.455577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:05.946 [2024-11-20 09:22:31.455584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:05.946 [2024-11-20 09:22:31.458053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.946 [2024-11-20 09:22:31.458217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:05.946 [2024-11-20 09:22:31.458302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.946 [2024-11-20 09:22:31.458302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:06.886 09:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:06.886 ************************************ 00:37:06.886 START TEST spdk_target_abort 00:37:06.886 ************************************ 00:37:06.886 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:06.886 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:06.886 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:06.886 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.886 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.146 spdk_targetn1 00:37:07.146 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.146 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:07.146 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.146 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.146 [2024-11-20 09:22:32.511814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.147 [2024-11-20 09:22:32.560144] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:07.147 09:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:07.407 [2024-11-20 09:22:32.744733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:232 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:07.407 [2024-11-20 09:22:32.744770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0020 p:1 m:0 dnr:0 00:37:07.407 [2024-11-20 09:22:32.761272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:760 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:07.407 [2024-11-20 09:22:32.761293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0061 p:1 m:0 dnr:0 00:37:07.407 [2024-11-20 09:22:32.762370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:840 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:07.407 [2024-11-20 
09:22:32.762386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:006a p:1 m:0 dnr:0 00:37:07.407 [2024-11-20 09:22:32.769047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1032 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:07.407 [2024-11-20 09:22:32.769071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0082 p:1 m:0 dnr:0 00:37:07.407 [2024-11-20 09:22:32.839326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3256 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:07.407 [2024-11-20 09:22:32.839348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0098 p:0 m:0 dnr:0 00:37:10.712 Initializing NVMe Controllers 00:37:10.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:10.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:10.712 Initialization complete. Launching workers. 
00:37:10.712 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11607, failed: 5 00:37:10.712 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2864, failed to submit 8748 00:37:10.712 success 766, unsuccessful 2098, failed 0 00:37:10.712 09:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:10.712 09:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:10.712 [2024-11-20 09:22:35.927482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:488 len:8 PRP1 0x200004e50000 PRP2 0x0 00:37:10.712 [2024-11-20 09:22:35.927520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:37:10.712 [2024-11-20 09:22:36.020377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2536 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:37:10.712 [2024-11-20 09:22:36.020404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:10.712 [2024-11-20 09:22:36.068332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:3648 len:8 PRP1 0x200004e42000 PRP2 0x0 00:37:10.712 [2024-11-20 09:22:36.068355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00d6 p:0 m:0 dnr:0 00:37:12.214 [2024-11-20 09:22:37.532019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:38096 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:12.214 [2024-11-20 09:22:37.532048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:00a3 p:1 m:0 dnr:0 00:37:13.601 Initializing NVMe Controllers 00:37:13.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:13.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:13.601 Initialization complete. Launching workers. 00:37:13.601 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8718, failed: 4 00:37:13.601 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1200, failed to submit 7522 00:37:13.601 success 382, unsuccessful 818, failed 0 00:37:13.601 09:22:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:13.601 09:22:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:13.869 [2024-11-20 09:22:39.250472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:179 nsid:1 lba:2848 len:8 PRP1 0x200004b18000 PRP2 0x0 00:37:13.869 [2024-11-20 09:22:39.250500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:179 cdw0:0 sqhd:00cd p:1 m:0 dnr:0 00:37:17.167 Initializing NVMe Controllers 00:37:17.167 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:17.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:17.167 Initialization complete. Launching workers. 
00:37:17.167 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43949, failed: 1 00:37:17.167 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2741, failed to submit 41209 00:37:17.167 success 587, unsuccessful 2154, failed 0 00:37:17.167 09:22:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:17.167 09:22:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.167 09:22:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:17.167 09:22:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.167 09:22:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:17.167 09:22:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.167 09:22:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1015612 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1015612 ']' 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1015612 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1015612 00:37:19.080 09:22:44 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1015612' 00:37:19.080 killing process with pid 1015612 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1015612 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1015612 00:37:19.080 00:37:19.080 real 0m12.080s 00:37:19.080 user 0m49.200s 00:37:19.080 sys 0m2.008s 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:19.080 ************************************ 00:37:19.080 END TEST spdk_target_abort 00:37:19.080 ************************************ 00:37:19.080 09:22:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:19.080 09:22:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:19.080 09:22:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:19.080 09:22:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:19.080 ************************************ 00:37:19.080 START TEST kernel_target_abort 00:37:19.080 ************************************ 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:19.080 09:22:44 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:19.080 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:19.081 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:19.081 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:19.081 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:19.081 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:19.081 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:19.081 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:37:19.081 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:19.081 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:19.081 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:19.081 09:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:22.386 Waiting for block devices as requested 00:37:22.386 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:22.386 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:22.386 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:22.646 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:22.646 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:22.646 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:22.907 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:22.907 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:22.907 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:23.168 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:23.168 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:23.429 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:23.429 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:23.429 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:23.429 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:23.689 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:23.689 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:23.949 09:22:49 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:23.949 No valid GPT data, bailing 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:23.949 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:24.210 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:24.210 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:24.210 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:24.210 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:37:24.210 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:24.210 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:24.210 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:24.211 00:37:24.211 Discovery Log Number of Records 2, Generation counter 2 00:37:24.211 =====Discovery Log Entry 0====== 00:37:24.211 trtype: tcp 00:37:24.211 adrfam: ipv4 00:37:24.211 subtype: current discovery subsystem 00:37:24.211 treq: not specified, sq flow control disable supported 00:37:24.211 portid: 1 00:37:24.211 trsvcid: 4420 00:37:24.211 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:24.211 traddr: 10.0.0.1 00:37:24.211 eflags: none 00:37:24.211 sectype: none 00:37:24.211 =====Discovery Log Entry 1====== 00:37:24.211 trtype: tcp 00:37:24.211 adrfam: ipv4 00:37:24.211 subtype: nvme subsystem 00:37:24.211 treq: not specified, sq flow control disable supported 00:37:24.211 portid: 1 00:37:24.211 trsvcid: 4420 00:37:24.211 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:24.211 traddr: 10.0.0.1 00:37:24.211 eflags: none 00:37:24.211 sectype: none 00:37:24.211 09:22:49 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:24.211 09:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:27.515 Initializing NVMe Controllers 00:37:27.515 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:27.515 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:27.515 Initialization complete. Launching workers. 
00:37:27.515 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67848, failed: 0 00:37:27.515 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67848, failed to submit 0 00:37:27.515 success 0, unsuccessful 67848, failed 0 00:37:27.515 09:22:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:27.515 09:22:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:30.818 Initializing NVMe Controllers 00:37:30.818 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:30.818 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:30.818 Initialization complete. Launching workers. 00:37:30.818 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 118196, failed: 0 00:37:30.818 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29730, failed to submit 88466 00:37:30.818 success 0, unsuccessful 29730, failed 0 00:37:30.818 09:22:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:30.818 09:22:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:34.118 Initializing NVMe Controllers 00:37:34.118 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:34.118 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:34.118 Initialization complete. Launching workers. 
00:37:34.118 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146994, failed: 0 00:37:34.118 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36802, failed to submit 110192 00:37:34.118 success 0, unsuccessful 36802, failed 0 00:37:34.118 09:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:34.118 09:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:34.118 09:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:34.118 09:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:34.118 09:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:34.118 09:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:34.118 09:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:34.118 09:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:34.118 09:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:34.118 09:22:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:37.425 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:37.425 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:39.350 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:39.350 00:37:39.350 real 0m20.374s 00:37:39.350 user 0m9.955s 00:37:39.350 sys 0m6.087s 00:37:39.350 09:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.350 09:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:39.350 ************************************ 00:37:39.350 END TEST kernel_target_abort 00:37:39.350 ************************************ 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:39.350 rmmod nvme_tcp 00:37:39.350 rmmod nvme_fabrics 00:37:39.350 rmmod nvme_keyring 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1015612 ']' 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1015612 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1015612 ']' 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1015612 00:37:39.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1015612) - No such process 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1015612 is not found' 00:37:39.350 Process with pid 1015612 is not found 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:39.350 09:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:43.560 Waiting for block devices as requested 00:37:43.560 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:43.560 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:43.560 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:43.560 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:43.560 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:43.560 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:43.560 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:43.560 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:43.560 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:43.822 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:43.822 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:43.822 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:44.083 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:44.083 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:44.083 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:44.344 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:44.344 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:44.605 09:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:44.605 09:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:44.605 09:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:44.605 09:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:44.605 09:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:44.605 09:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:44.605 09:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:44.605 09:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:44.605 09:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:44.605 09:23:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:44.605 09:23:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.153 09:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:47.153 00:37:47.153 real 0m52.409s 00:37:47.153 user 1m4.581s 00:37:47.153 sys 0m19.254s 00:37:47.153 09:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:47.153 09:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:47.153 ************************************ 00:37:47.153 END TEST nvmf_abort_qd_sizes 00:37:47.153 ************************************ 00:37:47.154 09:23:12 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:47.154 09:23:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:47.154 09:23:12 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:37:47.154 09:23:12 -- common/autotest_common.sh@10 -- # set +x 00:37:47.154 ************************************ 00:37:47.154 START TEST keyring_file 00:37:47.154 ************************************ 00:37:47.154 09:23:12 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:47.154 * Looking for test storage... 00:37:47.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:47.154 09:23:12 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:47.154 09:23:12 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:37:47.154 09:23:12 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:47.154 09:23:12 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:47.154 09:23:12 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:47.154 09:23:12 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:47.154 09:23:12 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.154 --rc genhtml_branch_coverage=1 00:37:47.154 --rc genhtml_function_coverage=1 00:37:47.154 --rc genhtml_legend=1 00:37:47.154 --rc geninfo_all_blocks=1 00:37:47.154 --rc geninfo_unexecuted_blocks=1 00:37:47.154 00:37:47.154 ' 00:37:47.154 09:23:12 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.154 --rc genhtml_branch_coverage=1 00:37:47.154 --rc genhtml_function_coverage=1 00:37:47.154 --rc genhtml_legend=1 00:37:47.154 --rc geninfo_all_blocks=1 00:37:47.154 --rc 
geninfo_unexecuted_blocks=1 00:37:47.154 00:37:47.154 ' 00:37:47.154 09:23:12 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.154 --rc genhtml_branch_coverage=1 00:37:47.154 --rc genhtml_function_coverage=1 00:37:47.154 --rc genhtml_legend=1 00:37:47.154 --rc geninfo_all_blocks=1 00:37:47.154 --rc geninfo_unexecuted_blocks=1 00:37:47.154 00:37:47.154 ' 00:37:47.154 09:23:12 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.154 --rc genhtml_branch_coverage=1 00:37:47.154 --rc genhtml_function_coverage=1 00:37:47.154 --rc genhtml_legend=1 00:37:47.154 --rc geninfo_all_blocks=1 00:37:47.154 --rc geninfo_unexecuted_blocks=1 00:37:47.154 00:37:47.154 ' 00:37:47.154 09:23:12 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:47.154 09:23:12 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:47.154 09:23:12 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:47.154 09:23:12 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:47.154 09:23:12 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.154 09:23:12 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.154 09:23:12 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.154 09:23:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:47.154 09:23:12 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:47.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:47.154 09:23:12 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:47.154 09:23:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:47.154 09:23:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:47.154 09:23:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:47.154 09:23:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:47.154 09:23:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:47.154 09:23:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:47.154 09:23:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:47.154 09:23:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:47.154 09:23:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:47.154 09:23:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:47.154 09:23:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:47.154 09:23:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:47.154 09:23:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9BpZVOtO3G 00:37:47.154 09:23:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:47.155 09:23:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9BpZVOtO3G 00:37:47.155 09:23:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9BpZVOtO3G 00:37:47.155 09:23:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.9BpZVOtO3G 00:37:47.155 09:23:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:47.155 09:23:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:47.155 09:23:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:47.155 09:23:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:47.155 09:23:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:47.155 09:23:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:47.155 09:23:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.euf1dPXFMZ 00:37:47.155 09:23:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:47.155 09:23:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:47.155 09:23:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.euf1dPXFMZ 00:37:47.155 09:23:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.euf1dPXFMZ 00:37:47.155 09:23:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.euf1dPXFMZ 
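The `prep_key` calls traced above (`mktemp`, `format_interchange_psk`, `chmod 0600`) build an NVMe/TCP TLS PSK interchange string and store it in a key file. A minimal sketch of what that appears to do is below; the exact layout (`NVMeTLSkey-1:<digest>:base64(key || crc32le):`) is an assumption based on the NVMe/TCP TLS PSK interchange format, and `check_key_perms`-style details of SPDK's real helper live in `nvmf/common.sh`, not here.

```shell
# Hedged sketch of the key prep traced above: mktemp a path, derive the
# interchange PSK from a hex key, write it, and lock permissions to 0600.
# The CRC32-appended base64 payload is an assumption, not SPDK's verbatim code.
key_hex=00112233445566778899aabbccddeeff
digest=0   # 0 = no hash function, matching the 'digest=0' trace above

path=$(mktemp)                 # stands in for keyring/common.sh@18
psk=$(python3 - "$key_hex" "$digest" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 appended little-endian
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:"
      f"{base64.b64encode(key + crc).decode()}:")
EOF
)
printf '%s\n' "$psk" > "$path"
chmod 0600 "$path"             # keyring requires owner-only permissions
echo "$path"
```

The resulting path is what the test later hands to `keyring_file_add_key key0 /tmp/tmp.XXXX` over the bperf socket.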
00:37:47.155 09:23:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=1025937 00:37:47.155 09:23:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1025937 00:37:47.155 09:23:12 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:47.155 09:23:12 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1025937 ']' 00:37:47.155 09:23:12 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:47.155 09:23:12 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:47.155 09:23:12 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:47.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:47.155 09:23:12 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:47.155 09:23:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:47.155 [2024-11-20 09:23:12.629335] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:37:47.155 [2024-11-20 09:23:12.629411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025937 ] 00:37:47.415 [2024-11-20 09:23:12.722695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.415 [2024-11-20 09:23:12.775986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:47.987 09:23:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:47.987 [2024-11-20 09:23:13.445945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:47.987 null0 00:37:47.987 [2024-11-20 09:23:13.477995] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:47.987 [2024-11-20 09:23:13.478405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.987 09:23:13 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.987 09:23:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:47.987 [2024-11-20 09:23:13.510062] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:48.249 request: 00:37:48.249 { 00:37:48.249 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:48.249 "secure_channel": false, 00:37:48.249 "listen_address": { 00:37:48.249 "trtype": "tcp", 00:37:48.249 "traddr": "127.0.0.1", 00:37:48.249 "trsvcid": "4420" 00:37:48.249 }, 00:37:48.249 "method": "nvmf_subsystem_add_listener", 00:37:48.249 "req_id": 1 00:37:48.249 } 00:37:48.249 Got JSON-RPC error response 00:37:48.249 response: 00:37:48.249 { 00:37:48.249 "code": -32602, 00:37:48.249 "message": "Invalid parameters" 00:37:48.249 } 00:37:48.249 09:23:13 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:48.249 09:23:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:48.249 09:23:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:48.249 09:23:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:48.249 09:23:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:48.249 09:23:13 keyring_file -- keyring/file.sh@47 -- # bperfpid=1025959 00:37:48.249 09:23:13 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1025959 /var/tmp/bperf.sock 00:37:48.249 09:23:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1025959 ']' 00:37:48.250 09:23:13 keyring_file -- keyring/file.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:48.250 09:23:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:48.250 09:23:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:48.250 09:23:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:48.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:48.250 09:23:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:48.250 09:23:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:48.250 [2024-11-20 09:23:13.572741] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:37:48.250 [2024-11-20 09:23:13.572806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025959 ] 00:37:48.250 [2024-11-20 09:23:13.665788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.250 [2024-11-20 09:23:13.719499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:49.192 09:23:14 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:49.192 09:23:14 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:49.192 09:23:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9BpZVOtO3G 00:37:49.192 09:23:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9BpZVOtO3G 00:37:49.192 09:23:14 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 
/tmp/tmp.euf1dPXFMZ 00:37:49.192 09:23:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.euf1dPXFMZ 00:37:49.452 09:23:14 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:49.452 09:23:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:49.452 09:23:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:49.452 09:23:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:49.452 09:23:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:49.452 09:23:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.9BpZVOtO3G == \/\t\m\p\/\t\m\p\.\9\B\p\Z\V\O\t\O\3\G ]] 00:37:49.452 09:23:14 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:49.452 09:23:14 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:49.452 09:23:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:49.452 09:23:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:49.452 09:23:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:49.713 09:23:15 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.euf1dPXFMZ == \/\t\m\p\/\t\m\p\.\e\u\f\1\d\P\X\F\M\Z ]] 00:37:49.713 09:23:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:49.713 09:23:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:49.713 09:23:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:49.713 09:23:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:49.713 09:23:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:49.713 09:23:15 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:49.974 09:23:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:49.974 09:23:15 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:49.974 09:23:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:49.974 09:23:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:49.974 09:23:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:49.974 09:23:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:49.974 09:23:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:49.974 09:23:15 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:49.974 09:23:15 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:49.974 09:23:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:50.235 [2024-11-20 09:23:15.660666] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:50.235 nvme0n1 00:37:50.496 09:23:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:50.496 09:23:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:50.496 09:23:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:50.496 09:23:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:50.496 09:23:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:37:50.496 09:23:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:50.496 09:23:15 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:50.496 09:23:15 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:50.496 09:23:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:50.496 09:23:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:50.496 09:23:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:50.496 09:23:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:50.496 09:23:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:50.757 09:23:16 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:50.757 09:23:16 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:50.757 Running I/O for 1 seconds... 
00:37:52.140 16183.00 IOPS, 63.21 MiB/s 00:37:52.140 Latency(us) 00:37:52.140 [2024-11-20T08:23:17.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:52.140 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:52.140 nvme0n1 : 1.01 16226.98 63.39 0.00 0.00 7870.69 5133.65 16274.77 00:37:52.140 [2024-11-20T08:23:17.669Z] =================================================================================================================== 00:37:52.140 [2024-11-20T08:23:17.669Z] Total : 16226.98 63.39 0.00 0.00 7870.69 5133.65 16274.77 00:37:52.140 { 00:37:52.140 "results": [ 00:37:52.140 { 00:37:52.140 "job": "nvme0n1", 00:37:52.140 "core_mask": "0x2", 00:37:52.140 "workload": "randrw", 00:37:52.140 "percentage": 50, 00:37:52.140 "status": "finished", 00:37:52.140 "queue_depth": 128, 00:37:52.140 "io_size": 4096, 00:37:52.140 "runtime": 1.005178, 00:37:52.140 "iops": 16226.976714571947, 00:37:52.140 "mibps": 63.38662779129667, 00:37:52.140 "io_failed": 0, 00:37:52.140 "io_timeout": 0, 00:37:52.140 "avg_latency_us": 7870.68586352768, 00:37:52.140 "min_latency_us": 5133.653333333334, 00:37:52.140 "max_latency_us": 16274.773333333333 00:37:52.140 } 00:37:52.140 ], 00:37:52.140 "core_count": 1 00:37:52.140 } 00:37:52.140 09:23:17 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:52.140 09:23:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:52.140 09:23:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:52.140 09:23:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:52.140 09:23:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:52.140 09:23:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:52.140 09:23:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:37:52.140 09:23:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.140 09:23:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:52.140 09:23:17 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:52.140 09:23:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:52.140 09:23:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:52.140 09:23:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:52.140 09:23:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:52.140 09:23:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.400 09:23:17 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:52.400 09:23:17 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:52.400 09:23:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:52.400 09:23:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:52.400 09:23:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:52.400 09:23:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:52.400 09:23:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:52.400 09:23:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:52.400 09:23:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:52.400 09:23:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:52.661 [2024-11-20 09:23:17.991562] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:52.661 [2024-11-20 09:23:17.991574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2279c10 (107): Transport endpoint is not connected 00:37:52.661 [2024-11-20 09:23:17.992569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2279c10 (9): Bad file descriptor 00:37:52.661 [2024-11-20 09:23:17.993571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:52.661 [2024-11-20 09:23:17.993581] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:52.661 [2024-11-20 09:23:17.993587] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:52.661 [2024-11-20 09:23:17.993598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:37:52.661 request: 00:37:52.661 { 00:37:52.661 "name": "nvme0", 00:37:52.661 "trtype": "tcp", 00:37:52.661 "traddr": "127.0.0.1", 00:37:52.661 "adrfam": "ipv4", 00:37:52.661 "trsvcid": "4420", 00:37:52.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:52.661 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:52.661 "prchk_reftag": false, 00:37:52.661 "prchk_guard": false, 00:37:52.661 "hdgst": false, 00:37:52.661 "ddgst": false, 00:37:52.661 "psk": "key1", 00:37:52.661 "allow_unrecognized_csi": false, 00:37:52.661 "method": "bdev_nvme_attach_controller", 00:37:52.661 "req_id": 1 00:37:52.661 } 00:37:52.661 Got JSON-RPC error response 00:37:52.661 response: 00:37:52.661 { 00:37:52.661 "code": -5, 00:37:52.661 "message": "Input/output error" 00:37:52.661 } 00:37:52.661 09:23:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:52.661 09:23:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:52.661 09:23:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:52.661 09:23:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:52.661 09:23:18 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:52.661 09:23:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:52.661 09:23:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:52.661 09:23:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:52.661 09:23:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:52.661 09:23:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.661 09:23:18 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:52.661 09:23:18 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:52.661 09:23:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:52.661 09:23:18 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:37:52.922 09:23:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:52.922 09:23:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:52.922 09:23:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.922 09:23:18 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:52.922 09:23:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:52.922 09:23:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:53.183 09:23:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:53.183 09:23:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:53.183 09:23:18 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:53.183 09:23:18 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:53.183 09:23:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:53.443 09:23:18 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:53.443 09:23:18 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.9BpZVOtO3G 00:37:53.443 09:23:18 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.9BpZVOtO3G 00:37:53.443 09:23:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:53.443 09:23:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.9BpZVOtO3G 00:37:53.443 09:23:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:53.443 09:23:18 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:53.443 09:23:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:53.443 09:23:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:53.443 09:23:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9BpZVOtO3G 00:37:53.443 09:23:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9BpZVOtO3G 00:37:53.704 [2024-11-20 09:23:19.029274] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9BpZVOtO3G': 0100660 00:37:53.704 [2024-11-20 09:23:19.029297] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:53.704 request: 00:37:53.704 { 00:37:53.704 "name": "key0", 00:37:53.704 "path": "/tmp/tmp.9BpZVOtO3G", 00:37:53.704 "method": "keyring_file_add_key", 00:37:53.704 "req_id": 1 00:37:53.704 } 00:37:53.704 Got JSON-RPC error response 00:37:53.704 response: 00:37:53.704 { 00:37:53.704 "code": -1, 00:37:53.704 "message": "Operation not permitted" 00:37:53.704 } 00:37:53.704 09:23:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:53.704 09:23:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:53.704 09:23:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:53.704 09:23:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:53.704 09:23:19 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.9BpZVOtO3G 00:37:53.704 09:23:19 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9BpZVOtO3G 00:37:53.704 09:23:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9BpZVOtO3G 00:37:53.964 09:23:19 
keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.9BpZVOtO3G 00:37:53.964 09:23:19 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:53.964 09:23:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:53.964 09:23:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:53.964 09:23:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:53.964 09:23:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:53.964 09:23:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:53.964 09:23:19 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:53.964 09:23:19 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:53.964 09:23:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:53.964 09:23:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:53.964 09:23:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:53.964 09:23:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:53.964 09:23:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:53.964 09:23:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:53.964 09:23:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:53.964 09:23:19 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:54.225 [2024-11-20 09:23:19.606733] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.9BpZVOtO3G': No such file or directory 00:37:54.225 [2024-11-20 09:23:19.606745] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:54.225 [2024-11-20 09:23:19.606758] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:54.225 [2024-11-20 09:23:19.606763] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:54.225 [2024-11-20 09:23:19.606769] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:54.225 [2024-11-20 09:23:19.606773] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:54.225 request: 00:37:54.225 { 00:37:54.225 "name": "nvme0", 00:37:54.225 "trtype": "tcp", 00:37:54.225 "traddr": "127.0.0.1", 00:37:54.225 "adrfam": "ipv4", 00:37:54.225 "trsvcid": "4420", 00:37:54.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:54.225 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:54.225 "prchk_reftag": false, 00:37:54.225 "prchk_guard": false, 00:37:54.225 "hdgst": false, 00:37:54.225 "ddgst": false, 00:37:54.225 "psk": "key0", 00:37:54.225 "allow_unrecognized_csi": false, 00:37:54.225 "method": "bdev_nvme_attach_controller", 00:37:54.225 "req_id": 1 00:37:54.225 } 00:37:54.225 Got JSON-RPC error response 00:37:54.225 response: 00:37:54.225 { 00:37:54.225 "code": -19, 00:37:54.225 "message": "No such device" 00:37:54.225 } 00:37:54.225 09:23:19 keyring_file -- common/autotest_common.sh@655 
-- # es=1 00:37:54.225 09:23:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:54.225 09:23:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:54.225 09:23:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:54.225 09:23:19 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:54.225 09:23:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:54.485 09:23:19 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:54.485 09:23:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:54.485 09:23:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:54.485 09:23:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:54.485 09:23:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:54.485 09:23:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:54.485 09:23:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.K2KHZoI8N5 00:37:54.485 09:23:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:54.485 09:23:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:54.485 09:23:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:54.485 09:23:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:54.486 09:23:19 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:54.486 09:23:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:54.486 09:23:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:54.486 09:23:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.K2KHZoI8N5 00:37:54.486 09:23:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.K2KHZoI8N5 
00:37:54.486 09:23:19 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.K2KHZoI8N5 00:37:54.486 09:23:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.K2KHZoI8N5 00:37:54.486 09:23:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.K2KHZoI8N5 00:37:54.486 09:23:19 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:54.486 09:23:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:54.746 nvme0n1 00:37:54.746 09:23:20 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:54.746 09:23:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:54.746 09:23:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:54.746 09:23:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:54.746 09:23:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:54.746 09:23:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.006 09:23:20 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:55.006 09:23:20 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:55.006 09:23:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:55.267 09:23:20 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:55.267 09:23:20 keyring_file -- 
keyring/file.sh@102 -- # jq -r .removed 00:37:55.267 09:23:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:55.267 09:23:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:55.267 09:23:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.267 09:23:20 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:55.267 09:23:20 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:55.267 09:23:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:55.267 09:23:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:55.267 09:23:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:55.267 09:23:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:55.267 09:23:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.528 09:23:20 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:55.528 09:23:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:55.528 09:23:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:55.789 09:23:21 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:55.789 09:23:21 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:55.789 09:23:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.789 09:23:21 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:55.789 09:23:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.K2KHZoI8N5 00:37:55.789 09:23:21 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.K2KHZoI8N5 00:37:56.050 09:23:21 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.euf1dPXFMZ 00:37:56.051 09:23:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.euf1dPXFMZ 00:37:56.312 09:23:21 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:56.312 09:23:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:56.312 nvme0n1 00:37:56.573 09:23:21 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:56.573 09:23:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:56.573 09:23:22 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:56.573 "subsystems": [ 00:37:56.573 { 00:37:56.573 "subsystem": "keyring", 00:37:56.573 "config": [ 00:37:56.573 { 00:37:56.573 "method": "keyring_file_add_key", 00:37:56.573 "params": { 00:37:56.573 "name": "key0", 00:37:56.573 "path": "/tmp/tmp.K2KHZoI8N5" 00:37:56.573 } 00:37:56.573 }, 00:37:56.573 { 00:37:56.573 "method": "keyring_file_add_key", 00:37:56.573 "params": { 00:37:56.573 "name": "key1", 00:37:56.573 "path": "/tmp/tmp.euf1dPXFMZ" 00:37:56.573 } 00:37:56.573 } 00:37:56.573 ] 00:37:56.573 }, 00:37:56.573 { 00:37:56.573 "subsystem": "iobuf", 00:37:56.573 "config": [ 00:37:56.573 { 00:37:56.573 "method": "iobuf_set_options", 
00:37:56.573 "params": { 00:37:56.573 "small_pool_count": 8192, 00:37:56.573 "large_pool_count": 1024, 00:37:56.573 "small_bufsize": 8192, 00:37:56.573 "large_bufsize": 135168, 00:37:56.573 "enable_numa": false 00:37:56.573 } 00:37:56.573 } 00:37:56.573 ] 00:37:56.573 }, 00:37:56.573 { 00:37:56.573 "subsystem": "sock", 00:37:56.573 "config": [ 00:37:56.573 { 00:37:56.573 "method": "sock_set_default_impl", 00:37:56.573 "params": { 00:37:56.573 "impl_name": "posix" 00:37:56.573 } 00:37:56.573 }, 00:37:56.573 { 00:37:56.573 "method": "sock_impl_set_options", 00:37:56.573 "params": { 00:37:56.573 "impl_name": "ssl", 00:37:56.573 "recv_buf_size": 4096, 00:37:56.573 "send_buf_size": 4096, 00:37:56.573 "enable_recv_pipe": true, 00:37:56.573 "enable_quickack": false, 00:37:56.573 "enable_placement_id": 0, 00:37:56.573 "enable_zerocopy_send_server": true, 00:37:56.573 "enable_zerocopy_send_client": false, 00:37:56.573 "zerocopy_threshold": 0, 00:37:56.573 "tls_version": 0, 00:37:56.573 "enable_ktls": false 00:37:56.573 } 00:37:56.573 }, 00:37:56.573 { 00:37:56.573 "method": "sock_impl_set_options", 00:37:56.573 "params": { 00:37:56.573 "impl_name": "posix", 00:37:56.573 "recv_buf_size": 2097152, 00:37:56.573 "send_buf_size": 2097152, 00:37:56.573 "enable_recv_pipe": true, 00:37:56.573 "enable_quickack": false, 00:37:56.573 "enable_placement_id": 0, 00:37:56.573 "enable_zerocopy_send_server": true, 00:37:56.573 "enable_zerocopy_send_client": false, 00:37:56.573 "zerocopy_threshold": 0, 00:37:56.573 "tls_version": 0, 00:37:56.573 "enable_ktls": false 00:37:56.573 } 00:37:56.573 } 00:37:56.573 ] 00:37:56.573 }, 00:37:56.573 { 00:37:56.573 "subsystem": "vmd", 00:37:56.573 "config": [] 00:37:56.573 }, 00:37:56.573 { 00:37:56.573 "subsystem": "accel", 00:37:56.573 "config": [ 00:37:56.573 { 00:37:56.573 "method": "accel_set_options", 00:37:56.573 "params": { 00:37:56.573 "small_cache_size": 128, 00:37:56.573 "large_cache_size": 16, 00:37:56.573 "task_count": 2048, 00:37:56.573 
"sequence_count": 2048, 00:37:56.573 "buf_count": 2048 00:37:56.573 } 00:37:56.573 } 00:37:56.573 ] 00:37:56.573 }, 00:37:56.573 { 00:37:56.573 "subsystem": "bdev", 00:37:56.573 "config": [ 00:37:56.573 { 00:37:56.573 "method": "bdev_set_options", 00:37:56.573 "params": { 00:37:56.573 "bdev_io_pool_size": 65535, 00:37:56.573 "bdev_io_cache_size": 256, 00:37:56.573 "bdev_auto_examine": true, 00:37:56.573 "iobuf_small_cache_size": 128, 00:37:56.573 "iobuf_large_cache_size": 16 00:37:56.573 } 00:37:56.574 }, 00:37:56.574 { 00:37:56.574 "method": "bdev_raid_set_options", 00:37:56.574 "params": { 00:37:56.574 "process_window_size_kb": 1024, 00:37:56.574 "process_max_bandwidth_mb_sec": 0 00:37:56.574 } 00:37:56.574 }, 00:37:56.574 { 00:37:56.574 "method": "bdev_iscsi_set_options", 00:37:56.574 "params": { 00:37:56.574 "timeout_sec": 30 00:37:56.574 } 00:37:56.574 }, 00:37:56.574 { 00:37:56.574 "method": "bdev_nvme_set_options", 00:37:56.574 "params": { 00:37:56.574 "action_on_timeout": "none", 00:37:56.574 "timeout_us": 0, 00:37:56.574 "timeout_admin_us": 0, 00:37:56.574 "keep_alive_timeout_ms": 10000, 00:37:56.574 "arbitration_burst": 0, 00:37:56.574 "low_priority_weight": 0, 00:37:56.574 "medium_priority_weight": 0, 00:37:56.574 "high_priority_weight": 0, 00:37:56.574 "nvme_adminq_poll_period_us": 10000, 00:37:56.574 "nvme_ioq_poll_period_us": 0, 00:37:56.574 "io_queue_requests": 512, 00:37:56.574 "delay_cmd_submit": true, 00:37:56.574 "transport_retry_count": 4, 00:37:56.574 "bdev_retry_count": 3, 00:37:56.574 "transport_ack_timeout": 0, 00:37:56.574 "ctrlr_loss_timeout_sec": 0, 00:37:56.574 "reconnect_delay_sec": 0, 00:37:56.574 "fast_io_fail_timeout_sec": 0, 00:37:56.574 "disable_auto_failback": false, 00:37:56.574 "generate_uuids": false, 00:37:56.574 "transport_tos": 0, 00:37:56.574 "nvme_error_stat": false, 00:37:56.574 "rdma_srq_size": 0, 00:37:56.574 "io_path_stat": false, 00:37:56.574 "allow_accel_sequence": false, 00:37:56.574 "rdma_max_cq_size": 0, 
00:37:56.574 "rdma_cm_event_timeout_ms": 0, 00:37:56.574 "dhchap_digests": [ 00:37:56.574 "sha256", 00:37:56.574 "sha384", 00:37:56.574 "sha512" 00:37:56.574 ], 00:37:56.574 "dhchap_dhgroups": [ 00:37:56.574 "null", 00:37:56.574 "ffdhe2048", 00:37:56.574 "ffdhe3072", 00:37:56.574 "ffdhe4096", 00:37:56.574 "ffdhe6144", 00:37:56.574 "ffdhe8192" 00:37:56.574 ] 00:37:56.574 } 00:37:56.574 }, 00:37:56.574 { 00:37:56.574 "method": "bdev_nvme_attach_controller", 00:37:56.574 "params": { 00:37:56.574 "name": "nvme0", 00:37:56.574 "trtype": "TCP", 00:37:56.574 "adrfam": "IPv4", 00:37:56.574 "traddr": "127.0.0.1", 00:37:56.574 "trsvcid": "4420", 00:37:56.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:56.574 "prchk_reftag": false, 00:37:56.574 "prchk_guard": false, 00:37:56.574 "ctrlr_loss_timeout_sec": 0, 00:37:56.574 "reconnect_delay_sec": 0, 00:37:56.574 "fast_io_fail_timeout_sec": 0, 00:37:56.574 "psk": "key0", 00:37:56.574 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:56.574 "hdgst": false, 00:37:56.574 "ddgst": false, 00:37:56.574 "multipath": "multipath" 00:37:56.574 } 00:37:56.574 }, 00:37:56.574 { 00:37:56.574 "method": "bdev_nvme_set_hotplug", 00:37:56.574 "params": { 00:37:56.574 "period_us": 100000, 00:37:56.574 "enable": false 00:37:56.574 } 00:37:56.574 }, 00:37:56.574 { 00:37:56.574 "method": "bdev_wait_for_examine" 00:37:56.574 } 00:37:56.574 ] 00:37:56.574 }, 00:37:56.574 { 00:37:56.574 "subsystem": "nbd", 00:37:56.574 "config": [] 00:37:56.574 } 00:37:56.574 ] 00:37:56.574 }' 00:37:56.574 09:23:22 keyring_file -- keyring/file.sh@115 -- # killprocess 1025959 00:37:56.574 09:23:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1025959 ']' 00:37:56.574 09:23:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1025959 00:37:56.574 09:23:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:56.835 09:23:22 keyring_file -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025959 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025959' 00:37:56.835 killing process with pid 1025959 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@973 -- # kill 1025959 00:37:56.835 Received shutdown signal, test time was about 1.000000 seconds 00:37:56.835 00:37:56.835 Latency(us) 00:37:56.835 [2024-11-20T08:23:22.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:56.835 [2024-11-20T08:23:22.364Z] =================================================================================================================== 00:37:56.835 [2024-11-20T08:23:22.364Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@978 -- # wait 1025959 00:37:56.835 09:23:22 keyring_file -- keyring/file.sh@118 -- # bperfpid=1027763 00:37:56.835 09:23:22 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1027763 /var/tmp/bperf.sock 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1027763 ']' 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.835 09:23:22 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:37:56.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.835 09:23:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:56.835 09:23:22 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:56.835 "subsystems": [ 00:37:56.835 { 00:37:56.835 "subsystem": "keyring", 00:37:56.835 "config": [ 00:37:56.835 { 00:37:56.835 "method": "keyring_file_add_key", 00:37:56.835 "params": { 00:37:56.835 "name": "key0", 00:37:56.835 "path": "/tmp/tmp.K2KHZoI8N5" 00:37:56.835 } 00:37:56.835 }, 00:37:56.835 { 00:37:56.835 "method": "keyring_file_add_key", 00:37:56.835 "params": { 00:37:56.835 "name": "key1", 00:37:56.835 "path": "/tmp/tmp.euf1dPXFMZ" 00:37:56.835 } 00:37:56.835 } 00:37:56.835 ] 00:37:56.835 }, 00:37:56.835 { 00:37:56.835 "subsystem": "iobuf", 00:37:56.835 "config": [ 00:37:56.835 { 00:37:56.835 "method": "iobuf_set_options", 00:37:56.835 "params": { 00:37:56.835 "small_pool_count": 8192, 00:37:56.835 "large_pool_count": 1024, 00:37:56.835 "small_bufsize": 8192, 00:37:56.835 "large_bufsize": 135168, 00:37:56.835 "enable_numa": false 00:37:56.835 } 00:37:56.835 } 00:37:56.835 ] 00:37:56.835 }, 00:37:56.836 { 00:37:56.836 "subsystem": "sock", 00:37:56.836 "config": [ 00:37:56.836 { 00:37:56.836 "method": "sock_set_default_impl", 00:37:56.836 "params": { 00:37:56.836 "impl_name": "posix" 00:37:56.836 } 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "method": "sock_impl_set_options", 00:37:56.836 "params": { 00:37:56.836 "impl_name": "ssl", 00:37:56.836 "recv_buf_size": 4096, 00:37:56.836 "send_buf_size": 4096, 00:37:56.836 "enable_recv_pipe": true, 00:37:56.836 "enable_quickack": false, 00:37:56.836 "enable_placement_id": 0, 00:37:56.836 "enable_zerocopy_send_server": true, 00:37:56.836 "enable_zerocopy_send_client": false, 00:37:56.836 "zerocopy_threshold": 0, 00:37:56.836 "tls_version": 0, 00:37:56.836 "enable_ktls": 
false 00:37:56.836 } 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "method": "sock_impl_set_options", 00:37:56.836 "params": { 00:37:56.836 "impl_name": "posix", 00:37:56.836 "recv_buf_size": 2097152, 00:37:56.836 "send_buf_size": 2097152, 00:37:56.836 "enable_recv_pipe": true, 00:37:56.836 "enable_quickack": false, 00:37:56.836 "enable_placement_id": 0, 00:37:56.836 "enable_zerocopy_send_server": true, 00:37:56.836 "enable_zerocopy_send_client": false, 00:37:56.836 "zerocopy_threshold": 0, 00:37:56.836 "tls_version": 0, 00:37:56.836 "enable_ktls": false 00:37:56.836 } 00:37:56.836 } 00:37:56.836 ] 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "subsystem": "vmd", 00:37:56.836 "config": [] 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "subsystem": "accel", 00:37:56.836 "config": [ 00:37:56.836 { 00:37:56.836 "method": "accel_set_options", 00:37:56.836 "params": { 00:37:56.836 "small_cache_size": 128, 00:37:56.836 "large_cache_size": 16, 00:37:56.836 "task_count": 2048, 00:37:56.836 "sequence_count": 2048, 00:37:56.836 "buf_count": 2048 00:37:56.836 } 00:37:56.836 } 00:37:56.836 ] 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "subsystem": "bdev", 00:37:56.836 "config": [ 00:37:56.836 { 00:37:56.836 "method": "bdev_set_options", 00:37:56.836 "params": { 00:37:56.836 "bdev_io_pool_size": 65535, 00:37:56.836 "bdev_io_cache_size": 256, 00:37:56.836 "bdev_auto_examine": true, 00:37:56.836 "iobuf_small_cache_size": 128, 00:37:56.836 "iobuf_large_cache_size": 16 00:37:56.836 } 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "method": "bdev_raid_set_options", 00:37:56.836 "params": { 00:37:56.836 "process_window_size_kb": 1024, 00:37:56.836 "process_max_bandwidth_mb_sec": 0 00:37:56.836 } 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "method": "bdev_iscsi_set_options", 00:37:56.836 "params": { 00:37:56.836 "timeout_sec": 30 00:37:56.836 } 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "method": "bdev_nvme_set_options", 00:37:56.836 "params": { 00:37:56.836 "action_on_timeout": "none", 
00:37:56.836 "timeout_us": 0, 00:37:56.836 "timeout_admin_us": 0, 00:37:56.836 "keep_alive_timeout_ms": 10000, 00:37:56.836 "arbitration_burst": 0, 00:37:56.836 "low_priority_weight": 0, 00:37:56.836 "medium_priority_weight": 0, 00:37:56.836 "high_priority_weight": 0, 00:37:56.836 "nvme_adminq_poll_period_us": 10000, 00:37:56.836 "nvme_ioq_poll_period_us": 0, 00:37:56.836 "io_queue_requests": 512, 00:37:56.836 "delay_cmd_submit": true, 00:37:56.836 "transport_retry_count": 4, 00:37:56.836 "bdev_retry_count": 3, 00:37:56.836 "transport_ack_timeout": 0, 00:37:56.836 "ctrlr_loss_timeout_sec": 0, 00:37:56.836 "reconnect_delay_sec": 0, 00:37:56.836 "fast_io_fail_timeout_sec": 0, 00:37:56.836 "disable_auto_failback": false, 00:37:56.836 "generate_uuids": false, 00:37:56.836 "transport_tos": 0, 00:37:56.836 "nvme_error_stat": false, 00:37:56.836 "rdma_srq_size": 0, 00:37:56.836 "io_path_stat": false, 00:37:56.836 "allow_accel_sequence": false, 00:37:56.836 "rdma_max_cq_size": 0, 00:37:56.836 "rdma_cm_event_timeout_ms": 0, 00:37:56.836 "dhchap_digests": [ 00:37:56.836 "sha256", 00:37:56.836 "sha384", 00:37:56.836 "sha512" 00:37:56.836 ], 00:37:56.836 "dhchap_dhgroups": [ 00:37:56.836 "null", 00:37:56.836 "ffdhe2048", 00:37:56.836 "ffdhe3072", 00:37:56.836 "ffdhe4096", 00:37:56.836 "ffdhe6144", 00:37:56.836 "ffdhe8192" 00:37:56.836 ] 00:37:56.836 } 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "method": "bdev_nvme_attach_controller", 00:37:56.836 "params": { 00:37:56.836 "name": "nvme0", 00:37:56.836 "trtype": "TCP", 00:37:56.836 "adrfam": "IPv4", 00:37:56.836 "traddr": "127.0.0.1", 00:37:56.836 "trsvcid": "4420", 00:37:56.836 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:56.836 "prchk_reftag": false, 00:37:56.836 "prchk_guard": false, 00:37:56.836 "ctrlr_loss_timeout_sec": 0, 00:37:56.836 "reconnect_delay_sec": 0, 00:37:56.836 "fast_io_fail_timeout_sec": 0, 00:37:56.836 "psk": "key0", 00:37:56.836 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:56.836 "hdgst": false, 
00:37:56.836 "ddgst": false, 00:37:56.836 "multipath": "multipath" 00:37:56.836 } 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "method": "bdev_nvme_set_hotplug", 00:37:56.836 "params": { 00:37:56.836 "period_us": 100000, 00:37:56.836 "enable": false 00:37:56.836 } 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "method": "bdev_wait_for_examine" 00:37:56.836 } 00:37:56.836 ] 00:37:56.836 }, 00:37:56.836 { 00:37:56.836 "subsystem": "nbd", 00:37:56.836 "config": [] 00:37:56.836 } 00:37:56.836 ] 00:37:56.836 }' 00:37:56.836 [2024-11-20 09:23:22.310235] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 00:37:56.836 [2024-11-20 09:23:22.310292] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027763 ] 00:37:57.097 [2024-11-20 09:23:22.393031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.097 [2024-11-20 09:23:22.420496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:57.097 [2024-11-20 09:23:22.563333] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:57.668 09:23:23 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.668 09:23:23 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:57.668 09:23:23 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:57.668 09:23:23 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:57.668 09:23:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:57.929 09:23:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:57.929 09:23:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:57.930 09:23:23 keyring_file -- keyring/common.sh@12 -- # 
get_key key0 00:37:57.930 09:23:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:57.930 09:23:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:57.930 09:23:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:57.930 09:23:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:58.190 09:23:23 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:58.190 09:23:23 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:58.190 09:23:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:58.190 09:23:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:58.190 09:23:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:58.190 09:23:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:58.190 09:23:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:58.190 09:23:23 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:58.190 09:23:23 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:58.190 09:23:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:58.190 09:23:23 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:58.451 09:23:23 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:58.451 09:23:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:58.451 09:23:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.K2KHZoI8N5 /tmp/tmp.euf1dPXFMZ 00:37:58.451 09:23:23 keyring_file -- keyring/file.sh@20 -- # killprocess 1027763 00:37:58.451 09:23:23 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1027763 ']' 00:37:58.451 
09:23:23 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1027763 00:37:58.451 09:23:23 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:58.451 09:23:23 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:58.451 09:23:23 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1027763 00:37:58.451 09:23:23 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:58.451 09:23:23 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:58.451 09:23:23 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1027763' 00:37:58.451 killing process with pid 1027763 00:37:58.451 09:23:23 keyring_file -- common/autotest_common.sh@973 -- # kill 1027763 00:37:58.451 Received shutdown signal, test time was about 1.000000 seconds 00:37:58.451 00:37:58.451 Latency(us) 00:37:58.451 [2024-11-20T08:23:23.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.451 [2024-11-20T08:23:23.980Z] =================================================================================================================== 00:37:58.451 [2024-11-20T08:23:23.980Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:58.451 09:23:23 keyring_file -- common/autotest_common.sh@978 -- # wait 1027763 00:37:58.711 09:23:23 keyring_file -- keyring/file.sh@21 -- # killprocess 1025937 00:37:58.711 09:23:23 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1025937 ']' 00:37:58.711 09:23:23 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1025937 00:37:58.711 09:23:23 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:58.711 09:23:23 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:58.711 09:23:23 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025937 00:37:58.711 09:23:24 keyring_file -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:37:58.711 09:23:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:58.711 09:23:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025937' 00:37:58.711 killing process with pid 1025937 00:37:58.711 09:23:24 keyring_file -- common/autotest_common.sh@973 -- # kill 1025937 00:37:58.711 09:23:24 keyring_file -- common/autotest_common.sh@978 -- # wait 1025937 00:37:58.971 00:37:58.971 real 0m12.035s 00:37:58.971 user 0m28.953s 00:37:58.971 sys 0m2.757s 00:37:58.971 09:23:24 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:58.971 09:23:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:58.971 ************************************ 00:37:58.971 END TEST keyring_file 00:37:58.971 ************************************ 00:37:58.971 09:23:24 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:37:58.971 09:23:24 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:58.971 09:23:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:58.971 09:23:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:58.971 09:23:24 -- common/autotest_common.sh@10 -- # set +x 00:37:58.971 ************************************ 00:37:58.971 START TEST keyring_linux 00:37:58.972 ************************************ 00:37:58.972 09:23:24 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:58.972 Joined session keyring: 554204765 00:37:58.972 * Looking for test storage... 
00:37:58.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:58.972 09:23:24 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:58.972 09:23:24 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:37:58.972 09:23:24 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:59.233 09:23:24 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:59.233 09:23:24 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:59.234 09:23:24 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:59.234 09:23:24 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:59.234 09:23:24 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:59.234 09:23:24 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:59.234 09:23:24 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:59.234 09:23:24 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:59.234 09:23:24 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:59.234 09:23:24 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.234 --rc genhtml_branch_coverage=1 00:37:59.234 --rc genhtml_function_coverage=1 00:37:59.234 --rc genhtml_legend=1 00:37:59.234 --rc geninfo_all_blocks=1 00:37:59.234 --rc geninfo_unexecuted_blocks=1 00:37:59.234 00:37:59.234 ' 00:37:59.234 09:23:24 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.234 --rc genhtml_branch_coverage=1 00:37:59.234 --rc genhtml_function_coverage=1 00:37:59.234 --rc genhtml_legend=1 00:37:59.234 --rc geninfo_all_blocks=1 00:37:59.234 --rc geninfo_unexecuted_blocks=1 00:37:59.234 00:37:59.234 ' 
00:37:59.234 09:23:24 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.234 --rc genhtml_branch_coverage=1 00:37:59.234 --rc genhtml_function_coverage=1 00:37:59.234 --rc genhtml_legend=1 00:37:59.234 --rc geninfo_all_blocks=1 00:37:59.234 --rc geninfo_unexecuted_blocks=1 00:37:59.234 00:37:59.234 ' 00:37:59.234 09:23:24 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.234 --rc genhtml_branch_coverage=1 00:37:59.234 --rc genhtml_function_coverage=1 00:37:59.234 --rc genhtml_legend=1 00:37:59.234 --rc geninfo_all_blocks=1 00:37:59.234 --rc geninfo_unexecuted_blocks=1 00:37:59.234 00:37:59.234 ' 00:37:59.234 09:23:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:59.234 09:23:24 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:59.234 09:23:24 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:59.234 09:23:24 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.234 09:23:24 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.234 09:23:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.234 09:23:24 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.234 09:23:24 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.234 09:23:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:59.234 09:23:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:59.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:59.234 09:23:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:59.234 09:23:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:59.234 09:23:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:59.234 09:23:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:59.234 09:23:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:59.234 09:23:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:59.234 /tmp/:spdk-test:key0 00:37:59.234 09:23:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:59.234 09:23:24 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:59.234 09:23:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:59.234 /tmp/:spdk-test:key1 00:37:59.234 09:23:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1028217 00:37:59.234 09:23:24 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 1028217 00:37:59.234 09:23:24 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:59.234 09:23:24 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1028217 ']' 00:37:59.234 09:23:24 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.234 09:23:24 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:59.234 09:23:24 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:59.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:59.234 09:23:24 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:59.234 09:23:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:59.235 [2024-11-20 09:23:24.718323] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:37:59.235 [2024-11-20 09:23:24.718406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028217 ] 00:37:59.495 [2024-11-20 09:23:24.807366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.495 [2024-11-20 09:23:24.842236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.072 09:23:25 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:00.072 09:23:25 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:00.072 09:23:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:00.072 09:23:25 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.072 09:23:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:00.072 [2024-11-20 09:23:25.503117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:00.072 null0 00:38:00.072 [2024-11-20 09:23:25.535179] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:00.072 [2024-11-20 09:23:25.535542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:00.072 09:23:25 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.072 09:23:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:00.072 849285214 00:38:00.072 09:23:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:00.072 607912299 00:38:00.072 09:23:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1028534 00:38:00.072 09:23:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1028534 /var/tmp/bperf.sock 00:38:00.072 09:23:25 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:00.072 09:23:25 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1028534 ']' 00:38:00.072 09:23:25 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:00.072 09:23:25 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:00.072 09:23:25 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:00.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:00.073 09:23:25 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:00.073 09:23:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:00.333 [2024-11-20 09:23:25.614189] Starting SPDK v25.01-pre git sha1 17ebaf46f / DPDK 24.03.0 initialization... 
00:38:00.333 [2024-11-20 09:23:25.614239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028534 ] 00:38:00.333 [2024-11-20 09:23:25.695186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.333 [2024-11-20 09:23:25.724828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:00.903 09:23:26 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:00.903 09:23:26 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:00.903 09:23:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:00.903 09:23:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:01.162 09:23:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:01.162 09:23:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:01.422 09:23:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:01.422 09:23:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:01.682 [2024-11-20 09:23:26.948924] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:01.682 nvme0n1 00:38:01.682 09:23:27 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:38:01.682 09:23:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:01.682 09:23:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:01.682 09:23:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:01.682 09:23:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:01.682 09:23:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:01.944 09:23:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:01.944 09:23:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:01.944 09:23:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:01.944 09:23:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:01.944 09:23:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:01.944 09:23:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:01.944 09:23:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:01.944 09:23:27 keyring_linux -- keyring/linux.sh@25 -- # sn=849285214 00:38:01.944 09:23:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:01.944 09:23:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:01.944 09:23:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 849285214 == \8\4\9\2\8\5\2\1\4 ]] 00:38:01.944 09:23:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 849285214 00:38:01.944 09:23:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:01.944 09:23:27 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:02.206 Running I/O for 1 seconds... 00:38:03.229 24622.00 IOPS, 96.18 MiB/s 00:38:03.229 Latency(us) 00:38:03.229 [2024-11-20T08:23:28.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.229 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:03.229 nvme0n1 : 1.01 24621.94 96.18 0.00 0.00 5183.15 1911.47 6389.76 00:38:03.229 [2024-11-20T08:23:28.758Z] =================================================================================================================== 00:38:03.229 [2024-11-20T08:23:28.758Z] Total : 24621.94 96.18 0.00 0.00 5183.15 1911.47 6389.76 00:38:03.229 { 00:38:03.229 "results": [ 00:38:03.229 { 00:38:03.229 "job": "nvme0n1", 00:38:03.229 "core_mask": "0x2", 00:38:03.229 "workload": "randread", 00:38:03.229 "status": "finished", 00:38:03.229 "queue_depth": 128, 00:38:03.229 "io_size": 4096, 00:38:03.229 "runtime": 1.005201, 00:38:03.229 "iops": 24621.941283385113, 00:38:03.230 "mibps": 96.1794581382231, 00:38:03.230 "io_failed": 0, 00:38:03.230 "io_timeout": 0, 00:38:03.230 "avg_latency_us": 5183.15176942761, 00:38:03.230 "min_latency_us": 1911.4666666666667, 00:38:03.230 "max_latency_us": 6389.76 00:38:03.230 } 00:38:03.230 ], 00:38:03.230 "core_count": 1 00:38:03.230 } 00:38:03.230 09:23:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:03.230 09:23:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:03.230 09:23:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:03.230 09:23:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:03.230 09:23:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:03.230 09:23:28 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:03.230 09:23:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:03.230 09:23:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:03.490 09:23:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:03.490 09:23:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:03.490 09:23:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:03.490 09:23:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:03.490 09:23:28 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:03.490 09:23:28 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:03.490 09:23:28 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:03.490 09:23:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:03.490 09:23:28 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:03.490 09:23:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:03.490 09:23:28 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:03.490 09:23:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:03.750 [2024-11-20 09:23:29.044499] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:03.750 [2024-11-20 09:23:29.044656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138e480 (107): Transport endpoint is not connected 00:38:03.750 [2024-11-20 09:23:29.045652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138e480 (9): Bad file descriptor 00:38:03.750 [2024-11-20 09:23:29.046654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:03.750 [2024-11-20 09:23:29.046662] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:03.750 [2024-11-20 09:23:29.046667] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:03.750 [2024-11-20 09:23:29.046674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:03.750 request: 00:38:03.750 { 00:38:03.750 "name": "nvme0", 00:38:03.750 "trtype": "tcp", 00:38:03.750 "traddr": "127.0.0.1", 00:38:03.750 "adrfam": "ipv4", 00:38:03.750 "trsvcid": "4420", 00:38:03.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:03.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:03.750 "prchk_reftag": false, 00:38:03.750 "prchk_guard": false, 00:38:03.750 "hdgst": false, 00:38:03.750 "ddgst": false, 00:38:03.750 "psk": ":spdk-test:key1", 00:38:03.750 "allow_unrecognized_csi": false, 00:38:03.750 "method": "bdev_nvme_attach_controller", 00:38:03.750 "req_id": 1 00:38:03.750 } 00:38:03.750 Got JSON-RPC error response 00:38:03.750 response: 00:38:03.750 { 00:38:03.750 "code": -5, 00:38:03.750 "message": "Input/output error" 00:38:03.750 } 00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@33 -- # sn=849285214 00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 849285214 00:38:03.750 1 links removed 00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:03.750 
09:23:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@33 -- # sn=607912299
00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 607912299
00:38:03.750 1 links removed
00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1028534
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1028534 ']'
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1028534
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1028534
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1028534'
00:38:03.750 killing process with pid 1028534
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 1028534
00:38:03.750 Received shutdown signal, test time was about 1.000000 seconds
00:38:03.750
00:38:03.750 Latency(us)
00:38:03.750 [2024-11-20T08:23:29.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:03.750 [2024-11-20T08:23:29.279Z] ===================================================================================================================
00:38:03.750 [2024-11-20T08:23:29.279Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 1028534
00:38:03.750 09:23:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1028217
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1028217 ']'
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1028217
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:03.750 09:23:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1028217
00:38:04.010 09:23:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:38:04.010 09:23:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:38:04.010 09:23:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1028217'
00:38:04.010 killing process with pid 1028217
00:38:04.010 09:23:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 1028217
00:38:04.010 09:23:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 1028217
00:38:04.010
00:38:04.010 real 0m5.180s
00:38:04.010 user 0m9.616s
00:38:04.010 sys 0m1.423s
00:38:04.010 09:23:29 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:04.010 09:23:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:38:04.010 ************************************
00:38:04.010 END TEST keyring_linux
00:38:04.010 ************************************
00:38:04.010 09:23:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:38:04.010 09:23:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:38:04.010 09:23:29 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:38:04.010 09:23:29 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:38:04.010 09:23:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:38:04.010 09:23:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:38:04.010 09:23:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:38:04.010 09:23:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:38:04.010 09:23:29 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:38:04.010 09:23:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:38:04.010 09:23:29 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:38:04.010 09:23:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:38:04.010 09:23:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:38:04.010 09:23:29 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:38:04.010 09:23:29 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:38:04.010 09:23:29 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:38:04.010 09:23:29 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:38:04.010 09:23:29 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:04.010 09:23:29 -- common/autotest_common.sh@10 -- # set +x
00:38:04.272 09:23:29 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:38:04.272 09:23:29 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:38:04.272 09:23:29 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:38:04.272 09:23:29 -- common/autotest_common.sh@10 -- # set +x
00:38:12.407 INFO: APP EXITING
00:38:12.407 INFO: killing all VMs
00:38:12.407 INFO: killing vhost app
00:38:12.407 WARN: no vhost pid file found
00:38:12.407 INFO: EXIT DONE
00:38:14.947 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:38:14.947 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:38:15.207 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:38:15.207 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:38:15.207 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:38:15.207 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:38:15.207 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:38:15.207 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:38:15.207 0000:65:00.0 (144d a80a): Already using the nvme driver
00:38:15.207 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:38:15.207 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:38:15.207 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:38:15.207 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:38:15.468 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:38:15.468 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:38:15.468 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:38:15.468 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:38:19.676 Cleaning
00:38:19.676 Removing: /var/run/dpdk/spdk0/config
00:38:19.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:38:19.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:38:19.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:38:19.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:38:19.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:38:19.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:38:19.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:38:19.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:38:19.676 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:38:19.676 Removing: /var/run/dpdk/spdk0/hugepage_info
00:38:19.676 Removing: /var/run/dpdk/spdk1/config
00:38:19.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:38:19.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:38:19.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:38:19.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:38:19.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:38:19.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:38:19.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:38:19.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:38:19.676 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:38:19.676 Removing: /var/run/dpdk/spdk1/hugepage_info
00:38:19.676 Removing: /var/run/dpdk/spdk2/config
00:38:19.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:38:19.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:38:19.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:38:19.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:38:19.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:38:19.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:38:19.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:38:19.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:38:19.676 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:38:19.676 Removing: /var/run/dpdk/spdk2/hugepage_info
00:38:19.676 Removing: /var/run/dpdk/spdk3/config
00:38:19.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:38:19.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:38:19.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:38:19.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:38:19.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:38:19.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:38:19.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:38:19.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:38:19.676 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:38:19.676 Removing: /var/run/dpdk/spdk3/hugepage_info
00:38:19.676 Removing: /var/run/dpdk/spdk4/config
00:38:19.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:38:19.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:38:19.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:38:19.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:38:19.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:38:19.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:38:19.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:38:19.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:38:19.676 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:38:19.676 Removing: /var/run/dpdk/spdk4/hugepage_info
00:38:19.676 Removing: /dev/shm/bdev_svc_trace.1
00:38:19.676 Removing: /dev/shm/nvmf_trace.0
00:38:19.676 Removing: /dev/shm/spdk_tgt_trace.pid450168
00:38:19.676 Removing: /var/run/dpdk/spdk0
00:38:19.676 Removing: /var/run/dpdk/spdk1
00:38:19.676 Removing: /var/run/dpdk/spdk2
00:38:19.676 Removing: /var/run/dpdk/spdk3
00:38:19.676 Removing: /var/run/dpdk/spdk4
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1000585
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1002088
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1004344
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1005814
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1015780
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1016428
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1017103
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1020041
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1020485
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1021062
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1025937
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1025959
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1027763
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1028217
00:38:19.676 Removing: /var/run/dpdk/spdk_pid1028534
00:38:19.676 Removing: /var/run/dpdk/spdk_pid448673
00:38:19.676 Removing: /var/run/dpdk/spdk_pid450168
00:38:19.676 Removing: /var/run/dpdk/spdk_pid451018
00:38:19.676 Removing: /var/run/dpdk/spdk_pid452057
00:38:19.676 Removing: /var/run/dpdk/spdk_pid452397
00:38:19.676 Removing: /var/run/dpdk/spdk_pid453458
00:38:19.676 Removing: /var/run/dpdk/spdk_pid453690
00:38:19.676 Removing: /var/run/dpdk/spdk_pid453934
00:38:19.676 Removing: /var/run/dpdk/spdk_pid455071
00:38:19.676 Removing: /var/run/dpdk/spdk_pid455761
00:38:19.676 Removing: /var/run/dpdk/spdk_pid456131
00:38:19.676 Removing: /var/run/dpdk/spdk_pid456476
00:38:19.676 Removing: /var/run/dpdk/spdk_pid456817
00:38:19.676 Removing: /var/run/dpdk/spdk_pid457165
00:38:19.676 Removing: /var/run/dpdk/spdk_pid457501
00:38:19.676 Removing: /var/run/dpdk/spdk_pid457853
00:38:19.676 Removing: /var/run/dpdk/spdk_pid458222
00:38:19.676 Removing: /var/run/dpdk/spdk_pid459318
00:38:19.676 Removing: /var/run/dpdk/spdk_pid462903
00:38:19.676 Removing: /var/run/dpdk/spdk_pid463200
00:38:19.676 Removing: /var/run/dpdk/spdk_pid463406
00:38:19.676 Removing: /var/run/dpdk/spdk_pid463646
00:38:19.676 Removing: /var/run/dpdk/spdk_pid464028
00:38:19.676 Removing: /var/run/dpdk/spdk_pid464349
00:38:19.676 Removing: /var/run/dpdk/spdk_pid464726
00:38:19.676 Removing: /var/run/dpdk/spdk_pid464772
00:38:19.676 Removing: /var/run/dpdk/spdk_pid465104
00:38:19.676 Removing: /var/run/dpdk/spdk_pid465409
00:38:19.676 Removing: /var/run/dpdk/spdk_pid465482
00:38:19.676 Removing: /var/run/dpdk/spdk_pid465816
00:38:19.676 Removing: /var/run/dpdk/spdk_pid466261
00:38:19.676 Removing: /var/run/dpdk/spdk_pid466609
00:38:19.676 Removing: /var/run/dpdk/spdk_pid466950
00:38:19.676 Removing: /var/run/dpdk/spdk_pid471542
00:38:19.676 Removing: /var/run/dpdk/spdk_pid476927
00:38:19.676 Removing: /var/run/dpdk/spdk_pid489586
00:38:19.676 Removing: /var/run/dpdk/spdk_pid490274
00:38:19.676 Removing: /var/run/dpdk/spdk_pid495584
00:38:19.676 Removing: /var/run/dpdk/spdk_pid496020
00:38:19.676 Removing: /var/run/dpdk/spdk_pid501099
00:38:19.676 Removing: /var/run/dpdk/spdk_pid508181
00:38:19.676 Removing: /var/run/dpdk/spdk_pid511364
00:38:19.676 Removing: /var/run/dpdk/spdk_pid524131
00:38:19.676 Removing: /var/run/dpdk/spdk_pid535051
00:38:19.676 Removing: /var/run/dpdk/spdk_pid537563
00:38:19.676 Removing: /var/run/dpdk/spdk_pid538797
00:38:19.676 Removing: /var/run/dpdk/spdk_pid559665
00:38:19.676 Removing: /var/run/dpdk/spdk_pid564544
00:38:19.676 Removing: /var/run/dpdk/spdk_pid621031
00:38:19.676 Removing: /var/run/dpdk/spdk_pid627439
00:38:19.677 Removing: /var/run/dpdk/spdk_pid634489
00:38:19.677 Removing: /var/run/dpdk/spdk_pid642262
00:38:19.677 Removing: /var/run/dpdk/spdk_pid642340
00:38:19.677 Removing: /var/run/dpdk/spdk_pid643465
00:38:19.677 Removing: /var/run/dpdk/spdk_pid644488
00:38:19.677 Removing: /var/run/dpdk/spdk_pid646000
00:38:19.677 Removing: /var/run/dpdk/spdk_pid646600
00:38:19.677 Removing: /var/run/dpdk/spdk_pid646736
00:38:19.677 Removing: /var/run/dpdk/spdk_pid646937
00:38:19.677 Removing: /var/run/dpdk/spdk_pid647122
00:38:19.677 Removing: /var/run/dpdk/spdk_pid647132
00:38:19.677 Removing: /var/run/dpdk/spdk_pid648134
00:38:19.677 Removing: /var/run/dpdk/spdk_pid649136
00:38:19.677 Removing: /var/run/dpdk/spdk_pid650144
00:38:19.677 Removing: /var/run/dpdk/spdk_pid650818
00:38:19.677 Removing: /var/run/dpdk/spdk_pid650826
00:38:19.677 Removing: /var/run/dpdk/spdk_pid651161
00:38:19.677 Removing: /var/run/dpdk/spdk_pid652600
00:38:19.677 Removing: /var/run/dpdk/spdk_pid653921
00:38:19.677 Removing: /var/run/dpdk/spdk_pid663663
00:38:19.677 Removing: /var/run/dpdk/spdk_pid698311
00:38:19.677 Removing: /var/run/dpdk/spdk_pid703810
00:38:19.677 Removing: /var/run/dpdk/spdk_pid705760
00:38:19.677 Removing: /var/run/dpdk/spdk_pid707895
00:38:19.677 Removing: /var/run/dpdk/spdk_pid708234
00:38:19.677 Removing: /var/run/dpdk/spdk_pid708580
00:38:19.677 Removing: /var/run/dpdk/spdk_pid708885
00:38:19.677 Removing: /var/run/dpdk/spdk_pid709642
00:38:19.677 Removing: /var/run/dpdk/spdk_pid711847
00:38:19.677 Removing: /var/run/dpdk/spdk_pid713069
00:38:19.677 Removing: /var/run/dpdk/spdk_pid713774
00:38:19.677 Removing: /var/run/dpdk/spdk_pid716417
00:38:19.677 Removing: /var/run/dpdk/spdk_pid717202
00:38:19.937 Removing: /var/run/dpdk/spdk_pid717915
00:38:19.937 Removing: /var/run/dpdk/spdk_pid722976
00:38:19.937 Removing: /var/run/dpdk/spdk_pid730240
00:38:19.937 Removing: /var/run/dpdk/spdk_pid730241
00:38:19.937 Removing: /var/run/dpdk/spdk_pid730242
00:38:19.937 Removing: /var/run/dpdk/spdk_pid734940
00:38:19.937 Removing: /var/run/dpdk/spdk_pid745185
00:38:19.937 Removing: /var/run/dpdk/spdk_pid750002
00:38:19.937 Removing: /var/run/dpdk/spdk_pid757236
00:38:19.937 Removing: /var/run/dpdk/spdk_pid758730
00:38:19.937 Removing: /var/run/dpdk/spdk_pid760573
00:38:19.937 Removing: /var/run/dpdk/spdk_pid762090
00:38:19.937 Removing: /var/run/dpdk/spdk_pid767768
00:38:19.937 Removing: /var/run/dpdk/spdk_pid772964
00:38:19.937 Removing: /var/run/dpdk/spdk_pid777979
00:38:19.937 Removing: /var/run/dpdk/spdk_pid787641
00:38:19.937 Removing: /var/run/dpdk/spdk_pid787643
00:38:19.937 Removing: /var/run/dpdk/spdk_pid792789
00:38:19.937 Removing: /var/run/dpdk/spdk_pid793029
00:38:19.937 Removing: /var/run/dpdk/spdk_pid793361
00:38:19.937 Removing: /var/run/dpdk/spdk_pid793747
00:38:19.937 Removing: /var/run/dpdk/spdk_pid793977
00:38:19.937 Removing: /var/run/dpdk/spdk_pid799414
00:38:19.937 Removing: /var/run/dpdk/spdk_pid800240
00:38:19.937 Removing: /var/run/dpdk/spdk_pid805421
00:38:19.937 Removing: /var/run/dpdk/spdk_pid808765
00:38:19.937 Removing: /var/run/dpdk/spdk_pid815402
00:38:19.937 Removing: /var/run/dpdk/spdk_pid821829
00:38:19.937 Removing: /var/run/dpdk/spdk_pid831928
00:38:19.938 Removing: /var/run/dpdk/spdk_pid841163
00:38:19.938 Removing: /var/run/dpdk/spdk_pid841187
00:38:19.938 Removing: /var/run/dpdk/spdk_pid864347
00:38:19.938 Removing: /var/run/dpdk/spdk_pid865033
00:38:19.938 Removing: /var/run/dpdk/spdk_pid865737
00:38:19.938 Removing: /var/run/dpdk/spdk_pid866585
00:38:19.938 Removing: /var/run/dpdk/spdk_pid867585
00:38:19.938 Removing: /var/run/dpdk/spdk_pid868423
00:38:19.938 Removing: /var/run/dpdk/spdk_pid869154
00:38:19.938 Removing: /var/run/dpdk/spdk_pid869842
00:38:19.938 Removing: /var/run/dpdk/spdk_pid874929
00:38:19.938 Removing: /var/run/dpdk/spdk_pid875238
00:38:19.938 Removing: /var/run/dpdk/spdk_pid882561
00:38:19.938 Removing: /var/run/dpdk/spdk_pid882699
00:38:19.938 Removing: /var/run/dpdk/spdk_pid889847
00:38:19.938 Removing: /var/run/dpdk/spdk_pid895034
00:38:19.938 Removing: /var/run/dpdk/spdk_pid906696
00:38:19.938 Removing: /var/run/dpdk/spdk_pid907413
00:38:19.938 Removing: /var/run/dpdk/spdk_pid912466
00:38:19.938 Removing: /var/run/dpdk/spdk_pid912821
00:38:19.938 Removing: /var/run/dpdk/spdk_pid917878
00:38:19.938 Removing: /var/run/dpdk/spdk_pid924618
00:38:19.938 Removing: /var/run/dpdk/spdk_pid927682
00:38:19.938 Removing: /var/run/dpdk/spdk_pid940481
00:38:19.938 Removing: /var/run/dpdk/spdk_pid951242
00:38:19.938 Removing: /var/run/dpdk/spdk_pid953147
00:38:19.938 Removing: /var/run/dpdk/spdk_pid954232
00:38:19.938 Removing: /var/run/dpdk/spdk_pid973746
00:38:19.938 Removing: /var/run/dpdk/spdk_pid978470
00:38:19.938 Removing: /var/run/dpdk/spdk_pid981710
00:38:19.938 Removing: /var/run/dpdk/spdk_pid989410
00:38:19.938 Removing: /var/run/dpdk/spdk_pid989415
00:38:20.198 Removing: /var/run/dpdk/spdk_pid995956
00:38:20.198 Removing: /var/run/dpdk/spdk_pid998385
00:38:20.198 Clean
00:38:20.198 09:23:45 -- common/autotest_common.sh@1453 -- # return 0
00:38:20.198 09:23:45 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:38:20.198 09:23:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:20.198 09:23:45 -- common/autotest_common.sh@10 -- # set +x
00:38:20.198 09:23:45 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:38:20.198 09:23:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:20.198 09:23:45 -- common/autotest_common.sh@10 -- # set +x
00:38:20.198 09:23:45 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:20.198 09:23:45 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:38:20.198 09:23:45 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:38:20.198 09:23:45 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:38:20.198 09:23:45 -- spdk/autotest.sh@398 -- # hostname
00:38:20.198 09:23:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:38:20.458 geninfo: WARNING: invalid characters removed from testname!
00:38:47.027 09:24:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:48.937 09:24:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:51.477 09:24:16 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:52.857 09:24:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:54.765 09:24:19 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:56.144 09:24:21 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:58.052 09:24:23 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:38:58.052 09:24:23 -- spdk/autorun.sh@1 -- $ timing_finish
00:38:58.052 09:24:23 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:38:58.052 09:24:23 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:58.052 09:24:23 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:38:58.052 09:24:23 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:58.052 + [[ -n 363215 ]]
00:38:58.052 + sudo kill 363215
00:38:58.062 [Pipeline] }
00:38:58.078 [Pipeline] // stage
00:38:58.082 [Pipeline] }
00:38:58.096 [Pipeline] // timeout
00:38:58.101 [Pipeline] }
00:38:58.114 [Pipeline] // catchError
00:38:58.118 [Pipeline] }
00:38:58.131 [Pipeline] // wrap
00:38:58.137 [Pipeline] }
00:38:58.149 [Pipeline] // catchError
00:38:58.157 [Pipeline] stage
00:38:58.159 [Pipeline] { (Epilogue)
00:38:58.172 [Pipeline] catchError
00:38:58.173 [Pipeline] {
00:38:58.184 [Pipeline] echo
00:38:58.186 Cleanup processes
00:38:58.190 [Pipeline] sh
00:38:58.474 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:58.474 1042129 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:58.488 [Pipeline] sh
00:38:58.773 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:58.773 ++ grep -v 'sudo pgrep'
00:38:58.773 ++ awk '{print $1}'
00:38:58.773 + sudo kill -9
00:38:58.773 + true
00:38:58.786 [Pipeline] sh
00:38:59.076 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:11.310 [Pipeline] sh
00:39:11.600 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:11.600 Artifacts sizes are good
00:39:11.614 [Pipeline] archiveArtifacts
00:39:11.621 Archiving artifacts
00:39:11.808 [Pipeline] sh
00:39:12.150 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:39:12.166 [Pipeline] cleanWs
00:39:12.177 [WS-CLEANUP] Deleting project workspace...
00:39:12.177 [WS-CLEANUP] Deferred wipeout is used...
00:39:12.186 [WS-CLEANUP] done
00:39:12.188 [Pipeline] }
00:39:12.207 [Pipeline] // catchError
00:39:12.221 [Pipeline] sh
00:39:12.511 + logger -p user.info -t JENKINS-CI
00:39:12.521 [Pipeline] }
00:39:12.536 [Pipeline] // stage
00:39:12.542 [Pipeline] }
00:39:12.556 [Pipeline] // node
00:39:12.561 [Pipeline] End of Pipeline
00:39:12.606 Finished: SUCCESS